Test Report: KVM_Linux_crio 19876

                    
0db15b506654906b6081fade5258c34c52419f7c:2024-10-28:36841

Failed tests (32/314)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 151.26
38 TestAddons/parallel/MetricsServer 349.57
47 TestAddons/StoppedEnableDisable 154.46
166 TestMultiControlPlane/serial/StopSecondaryNode 141.83
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 6.06
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.14
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.41
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 413.86
173 TestMultiControlPlane/serial/StopCluster 142.39
233 TestMultiNode/serial/RestartKeepsNodes 333.1
235 TestMultiNode/serial/StopMultiNode 145.44
242 TestPreload 181.91
250 TestKubernetesUpgrade 356.1
256 TestPause/serial/SecondStartNoReconfiguration 91.29
287 TestStartStop/group/old-k8s-version/serial/FirstStart 297.56
294 TestStartStop/group/no-preload/serial/Stop 139.15
299 TestStartStop/group/embed-certs/serial/Stop 139.15
300 TestStartStop/group/old-k8s-version/serial/DeployApp 0.52
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 101.14
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.11
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/old-k8s-version/serial/SecondStart 727.23
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
314 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.35
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.16
316 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.36
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.42
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 483.11
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 330.67
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 493.12
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 130.08
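For local triage, any entry in the table above can be re-run on its own by passing its full name to Go's -run filter. The invocation below is only a sketch: the checkout location, the timeout, and the --minikube-start-args value mirroring this job's kvm2/crio configuration are assumptions about the local environment, not details taken from this report.

# Sketch: re-run a single failed test from a minikube checkout (paths, timeout, and flags are assumptions)
go test ./test/integration -v -timeout 90m \
  -run "TestAddons/parallel/Ingress" \
  -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"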
TestAddons/parallel/Ingress (151.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-892779 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-892779 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-892779 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2818a832-80db-43ce-ad06-1d48dd9ab54e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2818a832-80db-43ce-ad06-1d48dd9ab54e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009921926s
I1028 10:58:34.680803  140303 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-892779 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.743433903s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-892779 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.106
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-892779 -n addons-892779
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 logs -n 25: (1.325881425s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| delete  | -p download-only-553455                                                                     | download-only-553455 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| delete  | -p download-only-114118                                                                     | download-only-114118 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| delete  | -p download-only-553455                                                                     | download-only-553455 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-110570 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC |                     |
	|         | binary-mirror-110570                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43021                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-110570                                                                     | binary-mirror-110570 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| addons  | enable dashboard -p                                                                         | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC |                     |
	|         | addons-892779                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC |                     |
	|         | addons-892779                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-892779 --wait=true                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:57 UTC | 28 Oct 24 10:57 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:57 UTC | 28 Oct 24 10:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:57 UTC | 28 Oct 24 10:57 UTC |
	|         | -p addons-892779                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-892779 ip                                                                            | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-892779 ssh cat                                                                       | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | /opt/local-path-provisioner/pvc-89c5613b-7edc-42a1-8a07-f72dc621843c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-892779 ssh curl -s                                                                   | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-892779 ip                                                                            | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 11:00 UTC | 28 Oct 24 11:00 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:55:19
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:55:19.338822  141007 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:55:19.338962  141007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:55:19.338974  141007 out.go:358] Setting ErrFile to fd 2...
	I1028 10:55:19.338979  141007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:55:19.339177  141007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 10:55:19.339814  141007 out.go:352] Setting JSON to false
	I1028 10:55:19.340729  141007 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2262,"bootTime":1730110657,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 10:55:19.340797  141007 start.go:139] virtualization: kvm guest
	I1028 10:55:19.343088  141007 out.go:177] * [addons-892779] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 10:55:19.345140  141007 notify.go:220] Checking for updates...
	I1028 10:55:19.345159  141007 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 10:55:19.346720  141007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:55:19.348489  141007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 10:55:19.349927  141007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 10:55:19.351444  141007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 10:55:19.353145  141007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 10:55:19.354877  141007 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:55:19.387632  141007 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 10:55:19.389160  141007 start.go:297] selected driver: kvm2
	I1028 10:55:19.389182  141007 start.go:901] validating driver "kvm2" against <nil>
	I1028 10:55:19.389195  141007 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 10:55:19.389982  141007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:55:19.390084  141007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 10:55:19.405550  141007 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 10:55:19.405607  141007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:55:19.405863  141007 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 10:55:19.405898  141007 cni.go:84] Creating CNI manager for ""
	I1028 10:55:19.405939  141007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 10:55:19.405947  141007 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 10:55:19.406007  141007 start.go:340] cluster config:
	{Name:addons-892779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:55:19.406098  141007 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:55:19.407968  141007 out.go:177] * Starting "addons-892779" primary control-plane node in "addons-892779" cluster
	I1028 10:55:19.409604  141007 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:55:19.409657  141007 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 10:55:19.409665  141007 cache.go:56] Caching tarball of preloaded images
	I1028 10:55:19.409763  141007 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 10:55:19.409777  141007 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 10:55:19.410080  141007 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/config.json ...
	I1028 10:55:19.410102  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/config.json: {Name:mka098263b9c5fb67d1a426a55772f1cc3aa82ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:19.410267  141007 start.go:360] acquireMachinesLock for addons-892779: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 10:55:19.410315  141007 start.go:364] duration metric: took 33.953µs to acquireMachinesLock for "addons-892779"
	I1028 10:55:19.410332  141007 start.go:93] Provisioning new machine with config: &{Name:addons-892779 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 10:55:19.410394  141007 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 10:55:19.412274  141007 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1028 10:55:19.412424  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:55:19.412479  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:55:19.428108  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I1028 10:55:19.428696  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:55:19.429378  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:55:19.429401  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:55:19.429783  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:55:19.429987  141007 main.go:141] libmachine: (addons-892779) Calling .GetMachineName
	I1028 10:55:19.430139  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:19.430294  141007 start.go:159] libmachine.API.Create for "addons-892779" (driver="kvm2")
	I1028 10:55:19.430328  141007 client.go:168] LocalClient.Create starting
	I1028 10:55:19.430372  141007 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 10:55:19.577405  141007 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 10:55:19.662594  141007 main.go:141] libmachine: Running pre-create checks...
	I1028 10:55:19.662617  141007 main.go:141] libmachine: (addons-892779) Calling .PreCreateCheck
	I1028 10:55:19.663165  141007 main.go:141] libmachine: (addons-892779) Calling .GetConfigRaw
	I1028 10:55:19.663569  141007 main.go:141] libmachine: Creating machine...
	I1028 10:55:19.663584  141007 main.go:141] libmachine: (addons-892779) Calling .Create
	I1028 10:55:19.663710  141007 main.go:141] libmachine: (addons-892779) Creating KVM machine...
	I1028 10:55:19.664912  141007 main.go:141] libmachine: (addons-892779) DBG | found existing default KVM network
	I1028 10:55:19.665694  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:19.665485  141029 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1028 10:55:19.665722  141007 main.go:141] libmachine: (addons-892779) DBG | created network xml: 
	I1028 10:55:19.665735  141007 main.go:141] libmachine: (addons-892779) DBG | <network>
	I1028 10:55:19.665743  141007 main.go:141] libmachine: (addons-892779) DBG |   <name>mk-addons-892779</name>
	I1028 10:55:19.665751  141007 main.go:141] libmachine: (addons-892779) DBG |   <dns enable='no'/>
	I1028 10:55:19.665761  141007 main.go:141] libmachine: (addons-892779) DBG |   
	I1028 10:55:19.665770  141007 main.go:141] libmachine: (addons-892779) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 10:55:19.665781  141007 main.go:141] libmachine: (addons-892779) DBG |     <dhcp>
	I1028 10:55:19.665791  141007 main.go:141] libmachine: (addons-892779) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 10:55:19.665806  141007 main.go:141] libmachine: (addons-892779) DBG |     </dhcp>
	I1028 10:55:19.665817  141007 main.go:141] libmachine: (addons-892779) DBG |   </ip>
	I1028 10:55:19.665827  141007 main.go:141] libmachine: (addons-892779) DBG |   
	I1028 10:55:19.665833  141007 main.go:141] libmachine: (addons-892779) DBG | </network>
	I1028 10:55:19.665838  141007 main.go:141] libmachine: (addons-892779) DBG | 
	I1028 10:55:19.671227  141007 main.go:141] libmachine: (addons-892779) DBG | trying to create private KVM network mk-addons-892779 192.168.39.0/24...
	I1028 10:55:19.739438  141007 main.go:141] libmachine: (addons-892779) DBG | private KVM network mk-addons-892779 192.168.39.0/24 created
	I1028 10:55:19.739475  141007 main.go:141] libmachine: (addons-892779) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779 ...
	I1028 10:55:19.739499  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:19.739395  141029 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 10:55:19.739601  141007 main.go:141] libmachine: (addons-892779) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 10:55:19.739635  141007 main.go:141] libmachine: (addons-892779) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 10:55:20.004738  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:20.004567  141029 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa...
	I1028 10:55:20.321771  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:20.321603  141029 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/addons-892779.rawdisk...
	I1028 10:55:20.321800  141007 main.go:141] libmachine: (addons-892779) DBG | Writing magic tar header
	I1028 10:55:20.321814  141007 main.go:141] libmachine: (addons-892779) DBG | Writing SSH key tar header
	I1028 10:55:20.321823  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:20.321724  141029 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779 ...
	I1028 10:55:20.321835  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779
	I1028 10:55:20.321944  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779 (perms=drwx------)
	I1028 10:55:20.321973  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 10:55:20.321985  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 10:55:20.321996  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 10:55:20.322008  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 10:55:20.322016  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 10:55:20.322022  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins
	I1028 10:55:20.322028  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home
	I1028 10:55:20.322036  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 10:55:20.322052  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 10:55:20.322066  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 10:55:20.322076  141007 main.go:141] libmachine: (addons-892779) DBG | Skipping /home - not owner
	I1028 10:55:20.322090  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 10:55:20.322101  141007 main.go:141] libmachine: (addons-892779) Creating domain...
	I1028 10:55:20.323218  141007 main.go:141] libmachine: (addons-892779) define libvirt domain using xml: 
	I1028 10:55:20.323245  141007 main.go:141] libmachine: (addons-892779) <domain type='kvm'>
	I1028 10:55:20.323252  141007 main.go:141] libmachine: (addons-892779)   <name>addons-892779</name>
	I1028 10:55:20.323257  141007 main.go:141] libmachine: (addons-892779)   <memory unit='MiB'>4000</memory>
	I1028 10:55:20.323263  141007 main.go:141] libmachine: (addons-892779)   <vcpu>2</vcpu>
	I1028 10:55:20.323271  141007 main.go:141] libmachine: (addons-892779)   <features>
	I1028 10:55:20.323277  141007 main.go:141] libmachine: (addons-892779)     <acpi/>
	I1028 10:55:20.323283  141007 main.go:141] libmachine: (addons-892779)     <apic/>
	I1028 10:55:20.323288  141007 main.go:141] libmachine: (addons-892779)     <pae/>
	I1028 10:55:20.323292  141007 main.go:141] libmachine: (addons-892779)     
	I1028 10:55:20.323297  141007 main.go:141] libmachine: (addons-892779)   </features>
	I1028 10:55:20.323301  141007 main.go:141] libmachine: (addons-892779)   <cpu mode='host-passthrough'>
	I1028 10:55:20.323308  141007 main.go:141] libmachine: (addons-892779)   
	I1028 10:55:20.323313  141007 main.go:141] libmachine: (addons-892779)   </cpu>
	I1028 10:55:20.323320  141007 main.go:141] libmachine: (addons-892779)   <os>
	I1028 10:55:20.323337  141007 main.go:141] libmachine: (addons-892779)     <type>hvm</type>
	I1028 10:55:20.323368  141007 main.go:141] libmachine: (addons-892779)     <boot dev='cdrom'/>
	I1028 10:55:20.323396  141007 main.go:141] libmachine: (addons-892779)     <boot dev='hd'/>
	I1028 10:55:20.323410  141007 main.go:141] libmachine: (addons-892779)     <bootmenu enable='no'/>
	I1028 10:55:20.323420  141007 main.go:141] libmachine: (addons-892779)   </os>
	I1028 10:55:20.323430  141007 main.go:141] libmachine: (addons-892779)   <devices>
	I1028 10:55:20.323441  141007 main.go:141] libmachine: (addons-892779)     <disk type='file' device='cdrom'>
	I1028 10:55:20.323465  141007 main.go:141] libmachine: (addons-892779)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/boot2docker.iso'/>
	I1028 10:55:20.323483  141007 main.go:141] libmachine: (addons-892779)       <target dev='hdc' bus='scsi'/>
	I1028 10:55:20.323495  141007 main.go:141] libmachine: (addons-892779)       <readonly/>
	I1028 10:55:20.323505  141007 main.go:141] libmachine: (addons-892779)     </disk>
	I1028 10:55:20.323515  141007 main.go:141] libmachine: (addons-892779)     <disk type='file' device='disk'>
	I1028 10:55:20.323528  141007 main.go:141] libmachine: (addons-892779)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 10:55:20.323544  141007 main.go:141] libmachine: (addons-892779)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/addons-892779.rawdisk'/>
	I1028 10:55:20.323555  141007 main.go:141] libmachine: (addons-892779)       <target dev='hda' bus='virtio'/>
	I1028 10:55:20.323576  141007 main.go:141] libmachine: (addons-892779)     </disk>
	I1028 10:55:20.323602  141007 main.go:141] libmachine: (addons-892779)     <interface type='network'>
	I1028 10:55:20.323613  141007 main.go:141] libmachine: (addons-892779)       <source network='mk-addons-892779'/>
	I1028 10:55:20.323620  141007 main.go:141] libmachine: (addons-892779)       <model type='virtio'/>
	I1028 10:55:20.323631  141007 main.go:141] libmachine: (addons-892779)     </interface>
	I1028 10:55:20.323646  141007 main.go:141] libmachine: (addons-892779)     <interface type='network'>
	I1028 10:55:20.323663  141007 main.go:141] libmachine: (addons-892779)       <source network='default'/>
	I1028 10:55:20.323674  141007 main.go:141] libmachine: (addons-892779)       <model type='virtio'/>
	I1028 10:55:20.323696  141007 main.go:141] libmachine: (addons-892779)     </interface>
	I1028 10:55:20.323706  141007 main.go:141] libmachine: (addons-892779)     <serial type='pty'>
	I1028 10:55:20.323714  141007 main.go:141] libmachine: (addons-892779)       <target port='0'/>
	I1028 10:55:20.323725  141007 main.go:141] libmachine: (addons-892779)     </serial>
	I1028 10:55:20.323737  141007 main.go:141] libmachine: (addons-892779)     <console type='pty'>
	I1028 10:55:20.323750  141007 main.go:141] libmachine: (addons-892779)       <target type='serial' port='0'/>
	I1028 10:55:20.323761  141007 main.go:141] libmachine: (addons-892779)     </console>
	I1028 10:55:20.323768  141007 main.go:141] libmachine: (addons-892779)     <rng model='virtio'>
	I1028 10:55:20.323849  141007 main.go:141] libmachine: (addons-892779)       <backend model='random'>/dev/random</backend>
	I1028 10:55:20.323879  141007 main.go:141] libmachine: (addons-892779)     </rng>
	I1028 10:55:20.323894  141007 main.go:141] libmachine: (addons-892779)     
	I1028 10:55:20.323903  141007 main.go:141] libmachine: (addons-892779)     
	I1028 10:55:20.323911  141007 main.go:141] libmachine: (addons-892779)   </devices>
	I1028 10:55:20.323921  141007 main.go:141] libmachine: (addons-892779) </domain>
	I1028 10:55:20.323932  141007 main.go:141] libmachine: (addons-892779) 
	I1028 10:55:20.328480  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:63:0d:71 in network default
	I1028 10:55:20.329082  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:20.329097  141007 main.go:141] libmachine: (addons-892779) Ensuring networks are active...
	I1028 10:55:20.329816  141007 main.go:141] libmachine: (addons-892779) Ensuring network default is active
	I1028 10:55:20.330156  141007 main.go:141] libmachine: (addons-892779) Ensuring network mk-addons-892779 is active
	I1028 10:55:20.330620  141007 main.go:141] libmachine: (addons-892779) Getting domain xml...
	I1028 10:55:20.331327  141007 main.go:141] libmachine: (addons-892779) Creating domain...
	I1028 10:55:21.556374  141007 main.go:141] libmachine: (addons-892779) Waiting to get IP...
	I1028 10:55:21.557046  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:21.557518  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:21.557575  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:21.557459  141029 retry.go:31] will retry after 300.808512ms: waiting for machine to come up
	I1028 10:55:21.860110  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:21.860725  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:21.860753  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:21.860687  141029 retry.go:31] will retry after 265.374853ms: waiting for machine to come up
	I1028 10:55:22.128294  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:22.128732  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:22.128754  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:22.128715  141029 retry.go:31] will retry after 428.941852ms: waiting for machine to come up
	I1028 10:55:22.559417  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:22.559864  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:22.559892  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:22.559804  141029 retry.go:31] will retry after 382.977845ms: waiting for machine to come up
	I1028 10:55:22.944439  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:22.944879  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:22.944906  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:22.944826  141029 retry.go:31] will retry after 464.717241ms: waiting for machine to come up
	I1028 10:55:23.411517  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:23.412060  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:23.412105  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:23.412003  141029 retry.go:31] will retry after 783.986977ms: waiting for machine to come up
	I1028 10:55:24.198089  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:24.198754  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:24.198778  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:24.198687  141029 retry.go:31] will retry after 893.564422ms: waiting for machine to come up
	I1028 10:55:25.094315  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:25.094658  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:25.094679  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:25.094621  141029 retry.go:31] will retry after 1.159093255s: waiting for machine to come up
	I1028 10:55:26.256081  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:26.256513  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:26.256536  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:26.256475  141029 retry.go:31] will retry after 1.171773821s: waiting for machine to come up
	I1028 10:55:27.429585  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:27.430183  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:27.430210  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:27.430147  141029 retry.go:31] will retry after 2.270421076s: waiting for machine to come up
	I1028 10:55:29.702478  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:29.702894  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:29.702927  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:29.702844  141029 retry.go:31] will retry after 2.482086728s: waiting for machine to come up
	I1028 10:55:32.188442  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:32.188906  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:32.188932  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:32.188826  141029 retry.go:31] will retry after 2.448291987s: waiting for machine to come up
	I1028 10:55:34.638905  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:34.639359  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:34.639383  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:34.639318  141029 retry.go:31] will retry after 3.063947725s: waiting for machine to come up
	I1028 10:55:37.704581  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:37.704986  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:37.705009  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:37.704960  141029 retry.go:31] will retry after 4.695382005s: waiting for machine to come up
	I1028 10:55:42.403938  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.404433  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has current primary IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.404489  141007 main.go:141] libmachine: (addons-892779) Found IP for machine: 192.168.39.106
	I1028 10:55:42.404515  141007 main.go:141] libmachine: (addons-892779) Reserving static IP address...
	I1028 10:55:42.404895  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find host DHCP lease matching {name: "addons-892779", mac: "52:54:00:7b:e3:76", ip: "192.168.39.106"} in network mk-addons-892779
	I1028 10:55:42.483522  141007 main.go:141] libmachine: (addons-892779) DBG | Getting to WaitForSSH function...
	I1028 10:55:42.483557  141007 main.go:141] libmachine: (addons-892779) Reserved static IP address: 192.168.39.106
	I1028 10:55:42.483581  141007 main.go:141] libmachine: (addons-892779) Waiting for SSH to be available...
	I1028 10:55:42.486681  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.487120  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.487156  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.487327  141007 main.go:141] libmachine: (addons-892779) DBG | Using SSH client type: external
	I1028 10:55:42.487380  141007 main.go:141] libmachine: (addons-892779) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa (-rw-------)
	I1028 10:55:42.487445  141007 main.go:141] libmachine: (addons-892779) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 10:55:42.487468  141007 main.go:141] libmachine: (addons-892779) DBG | About to run SSH command:
	I1028 10:55:42.487487  141007 main.go:141] libmachine: (addons-892779) DBG | exit 0
	I1028 10:55:42.613718  141007 main.go:141] libmachine: (addons-892779) DBG | SSH cmd err, output: <nil>: 
	I1028 10:55:42.614088  141007 main.go:141] libmachine: (addons-892779) KVM machine creation complete!
	I1028 10:55:42.614409  141007 main.go:141] libmachine: (addons-892779) Calling .GetConfigRaw
	I1028 10:55:42.614956  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:42.615147  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:42.615275  141007 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 10:55:42.615291  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:55:42.617042  141007 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 10:55:42.617059  141007 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 10:55:42.617066  141007 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 10:55:42.617072  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:42.619675  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.620043  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.620068  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.620206  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:42.620365  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.620525  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.620656  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:42.620812  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:42.620997  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:42.621009  141007 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 10:55:42.725004  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 10:55:42.725034  141007 main.go:141] libmachine: Detecting the provisioner...
	I1028 10:55:42.725050  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:42.728062  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.728390  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.728418  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.728574  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:42.728754  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.728927  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.729059  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:42.729261  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:42.729426  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:42.729437  141007 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 10:55:42.834431  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 10:55:42.834550  141007 main.go:141] libmachine: found compatible host: buildroot
	I1028 10:55:42.834569  141007 main.go:141] libmachine: Provisioning with buildroot...
	I1028 10:55:42.834586  141007 main.go:141] libmachine: (addons-892779) Calling .GetMachineName
	I1028 10:55:42.834868  141007 buildroot.go:166] provisioning hostname "addons-892779"
	I1028 10:55:42.834898  141007 main.go:141] libmachine: (addons-892779) Calling .GetMachineName
	I1028 10:55:42.835108  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:42.837837  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.838192  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.838220  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.838383  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:42.838569  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.838735  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.838885  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:42.839040  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:42.839255  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:42.839271  141007 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-892779 && echo "addons-892779" | sudo tee /etc/hostname
	I1028 10:55:42.960598  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-892779
	
	I1028 10:55:42.960633  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:42.963564  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.963989  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.964013  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.964348  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:42.964496  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.964594  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.964742  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:42.964898  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:42.965083  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:42.965099  141007 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-892779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-892779/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-892779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 10:55:43.083462  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 10:55:43.083508  141007 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 10:55:43.083533  141007 buildroot.go:174] setting up certificates
	I1028 10:55:43.083546  141007 provision.go:84] configureAuth start
	I1028 10:55:43.083556  141007 main.go:141] libmachine: (addons-892779) Calling .GetMachineName
	I1028 10:55:43.083837  141007 main.go:141] libmachine: (addons-892779) Calling .GetIP
	I1028 10:55:43.086572  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.086936  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.086963  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.087160  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.089377  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.089767  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.089796  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.089926  141007 provision.go:143] copyHostCerts
	I1028 10:55:43.089999  141007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 10:55:43.090157  141007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 10:55:43.090213  141007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 10:55:43.090259  141007 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.addons-892779 san=[127.0.0.1 192.168.39.106 addons-892779 localhost minikube]
	I1028 10:55:43.228217  141007 provision.go:177] copyRemoteCerts
	I1028 10:55:43.228273  141007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 10:55:43.228295  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.231198  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.231519  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.231548  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.231749  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.231935  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.232061  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.232177  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:55:43.316364  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 10:55:43.342498  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 10:55:43.368125  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 10:55:43.393360  141007 provision.go:87] duration metric: took 309.798537ms to configureAuth
	I1028 10:55:43.393391  141007 buildroot.go:189] setting minikube options for container-runtime
	I1028 10:55:43.393582  141007 config.go:182] Loaded profile config "addons-892779": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 10:55:43.393662  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.396695  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.397055  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.397091  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.397266  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.397482  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.397677  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.397848  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.397992  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:43.398151  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:43.398165  141007 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 10:55:43.627324  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 10:55:43.627356  141007 main.go:141] libmachine: Checking connection to Docker...
	I1028 10:55:43.627365  141007 main.go:141] libmachine: (addons-892779) Calling .GetURL
	I1028 10:55:43.628845  141007 main.go:141] libmachine: (addons-892779) DBG | Using libvirt version 6000000
	I1028 10:55:43.631450  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.631854  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.631890  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.632102  141007 main.go:141] libmachine: Docker is up and running!
	I1028 10:55:43.632118  141007 main.go:141] libmachine: Reticulating splines...
	I1028 10:55:43.632128  141007 client.go:171] duration metric: took 24.201788801s to LocalClient.Create
	I1028 10:55:43.632154  141007 start.go:167] duration metric: took 24.201861716s to libmachine.API.Create "addons-892779"
	I1028 10:55:43.632177  141007 start.go:293] postStartSetup for "addons-892779" (driver="kvm2")
	I1028 10:55:43.632193  141007 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 10:55:43.632220  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.632473  141007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 10:55:43.632498  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.634808  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.635189  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.635205  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.635418  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.635638  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.635783  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.635912  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:55:43.720544  141007 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 10:55:43.724808  141007 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 10:55:43.724850  141007 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 10:55:43.724951  141007 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 10:55:43.724986  141007 start.go:296] duration metric: took 92.799351ms for postStartSetup
	I1028 10:55:43.725021  141007 main.go:141] libmachine: (addons-892779) Calling .GetConfigRaw
	I1028 10:55:43.725608  141007 main.go:141] libmachine: (addons-892779) Calling .GetIP
	I1028 10:55:43.728028  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.728385  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.728414  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.728607  141007 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/config.json ...
	I1028 10:55:43.728821  141007 start.go:128] duration metric: took 24.318415865s to createHost
	I1028 10:55:43.728851  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.731173  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.731546  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.731580  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.731726  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.731914  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.732155  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.732326  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.732518  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:43.732682  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:43.732693  141007 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 10:55:43.838396  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730112943.809346361
	
	I1028 10:55:43.838424  141007 fix.go:216] guest clock: 1730112943.809346361
	I1028 10:55:43.838433  141007 fix.go:229] Guest: 2024-10-28 10:55:43.809346361 +0000 UTC Remote: 2024-10-28 10:55:43.72883726 +0000 UTC m=+24.427622117 (delta=80.509101ms)
	I1028 10:55:43.838484  141007 fix.go:200] guest clock delta is within tolerance: 80.509101ms
	I1028 10:55:43.838492  141007 start.go:83] releasing machines lock for "addons-892779", held for 24.428166535s
	I1028 10:55:43.838521  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.838786  141007 main.go:141] libmachine: (addons-892779) Calling .GetIP
	I1028 10:55:43.841838  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.842278  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.842307  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.842464  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.843023  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.843196  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.843284  141007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 10:55:43.843346  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.843387  141007 ssh_runner.go:195] Run: cat /version.json
	I1028 10:55:43.843416  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.846296  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.846325  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.846651  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.846681  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.846715  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.846731  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.846839  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.846931  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.847024  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.847091  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.847162  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.847220  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.847286  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:55:43.847318  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:55:43.951281  141007 ssh_runner.go:195] Run: systemctl --version
	I1028 10:55:43.958027  141007 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 10:55:44.121177  141007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 10:55:44.128479  141007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 10:55:44.128560  141007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 10:55:44.147474  141007 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 10:55:44.147502  141007 start.go:495] detecting cgroup driver to use...
	I1028 10:55:44.147570  141007 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 10:55:44.164142  141007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 10:55:44.179618  141007 docker.go:217] disabling cri-docker service (if available) ...
	I1028 10:55:44.179681  141007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 10:55:44.194807  141007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 10:55:44.209829  141007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 10:55:44.322617  141007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 10:55:44.457091  141007 docker.go:233] disabling docker service ...
	I1028 10:55:44.457169  141007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 10:55:44.472608  141007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 10:55:44.486472  141007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 10:55:44.620106  141007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 10:55:44.748714  141007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 10:55:44.763436  141007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 10:55:44.782711  141007 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 10:55:44.782768  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.793825  141007 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 10:55:44.793892  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.805075  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.816243  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.827490  141007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 10:55:44.839005  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.850290  141007 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.868242  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.879209  141007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 10:55:44.888944  141007 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 10:55:44.889002  141007 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 10:55:44.908562  141007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 10:55:44.922885  141007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:55:45.031729  141007 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 10:55:45.128849  141007 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 10:55:45.128941  141007 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 10:55:45.134025  141007 start.go:563] Will wait 60s for crictl version
	I1028 10:55:45.134102  141007 ssh_runner.go:195] Run: which crictl
	I1028 10:55:45.138032  141007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 10:55:45.181652  141007 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 10:55:45.181770  141007 ssh_runner.go:195] Run: crio --version
	I1028 10:55:45.211427  141007 ssh_runner.go:195] Run: crio --version
	I1028 10:55:45.242954  141007 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 10:55:45.244330  141007 main.go:141] libmachine: (addons-892779) Calling .GetIP
	I1028 10:55:45.247038  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:45.247361  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:45.247387  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:45.247584  141007 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 10:55:45.252064  141007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 10:55:45.265334  141007 kubeadm.go:883] updating cluster {Name:addons-892779 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 10:55:45.265447  141007 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:55:45.265494  141007 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 10:55:45.303366  141007 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 10:55:45.303436  141007 ssh_runner.go:195] Run: which lz4
	I1028 10:55:45.308074  141007 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 10:55:45.312561  141007 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 10:55:45.312596  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 10:55:46.767713  141007 crio.go:462] duration metric: took 1.45968553s to copy over tarball
	I1028 10:55:46.767797  141007 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 10:55:49.064182  141007 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.296357601s)
	I1028 10:55:49.064216  141007 crio.go:469] duration metric: took 2.296466387s to extract the tarball
	I1028 10:55:49.064224  141007 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 10:55:49.105000  141007 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 10:55:49.156410  141007 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 10:55:49.156437  141007 cache_images.go:84] Images are preloaded, skipping loading
	I1028 10:55:49.156445  141007 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.2 crio true true} ...
	I1028 10:55:49.156547  141007 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-892779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 10:55:49.156610  141007 ssh_runner.go:195] Run: crio config
	I1028 10:55:49.215773  141007 cni.go:84] Creating CNI manager for ""
	I1028 10:55:49.215799  141007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 10:55:49.215810  141007 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 10:55:49.215832  141007 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-892779 NodeName:addons-892779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 10:55:49.215947  141007 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-892779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 10:55:49.216005  141007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 10:55:49.226945  141007 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 10:55:49.227007  141007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 10:55:49.238852  141007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 10:55:49.258237  141007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 10:55:49.276761  141007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1028 10:55:49.294732  141007 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I1028 10:55:49.298946  141007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 10:55:49.312232  141007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:55:49.447632  141007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 10:55:49.466005  141007 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779 for IP: 192.168.39.106
	I1028 10:55:49.466038  141007 certs.go:194] generating shared ca certs ...
	I1028 10:55:49.466057  141007 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.466212  141007 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 10:55:49.603469  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt ...
	I1028 10:55:49.603501  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt: {Name:mk054550a0fe354b3c02d1432ba9351dced683bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.603696  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key ...
	I1028 10:55:49.603711  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key: {Name:mk4b7477e3761da1d78e3e4f1c6e0daa874a67de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.603812  141007 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 10:55:49.698175  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt ...
	I1028 10:55:49.698209  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt: {Name:mk7e92ecf4d6400b107409be7619010de2dda2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.698404  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key ...
	I1028 10:55:49.698421  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key: {Name:mk44c6e5638cfda241a2bee5cb00c19511e2a30f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.698523  141007 certs.go:256] generating profile certs ...
	I1028 10:55:49.698597  141007 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.key
	I1028 10:55:49.698616  141007 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt with IP's: []
	I1028 10:55:49.750900  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt ...
	I1028 10:55:49.750935  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: {Name:mkdd92ed1d1be6dff715d84b590f28bd5d2a2d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.751140  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.key ...
	I1028 10:55:49.751158  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.key: {Name:mkd3bde6b0f0846cbc5a6d4d432825ecb16c07bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.751294  141007 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key.33f8117d
	I1028 10:55:49.751320  141007 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt.33f8117d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.106]
	I1028 10:55:50.104817  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt.33f8117d ...
	I1028 10:55:50.104853  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt.33f8117d: {Name:mk295c55c16fbdb7a6141ddaa94a647e76e2e0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:50.105053  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key.33f8117d ...
	I1028 10:55:50.105072  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key.33f8117d: {Name:mkc520591f57fd9b7ad5872b707ae9ee59a38bcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:50.105175  141007 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt.33f8117d -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt
	I1028 10:55:50.105273  141007 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key.33f8117d -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key
	I1028 10:55:50.105347  141007 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.key
	I1028 10:55:50.105375  141007 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.crt with IP's: []
	I1028 10:55:50.208201  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.crt ...
	I1028 10:55:50.208236  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.crt: {Name:mkc2e85fe6e63b2edfeaa492eb26b69df346de19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:50.208431  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.key ...
	I1028 10:55:50.208451  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.key: {Name:mk1318316227f112d5da9f267b5e8c039e4f2824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:50.208688  141007 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 10:55:50.208735  141007 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 10:55:50.208766  141007 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 10:55:50.208801  141007 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 10:55:50.209454  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 10:55:50.239104  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 10:55:50.272308  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 10:55:50.299244  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 10:55:50.326103  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 10:55:50.352994  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 10:55:50.379724  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 10:55:50.406935  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 10:55:50.433319  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 10:55:50.459198  141007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 10:55:50.478007  141007 ssh_runner.go:195] Run: openssl version
	I1028 10:55:50.484628  141007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 10:55:50.496702  141007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:55:50.502386  141007 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:55:50.502460  141007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:55:50.508903  141007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 10:55:50.520803  141007 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 10:55:50.525462  141007 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 10:55:50.525519  141007 kubeadm.go:392] StartCluster: {Name:addons-892779 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:55:50.525627  141007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 10:55:50.525715  141007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 10:55:50.569043  141007 cri.go:89] found id: ""
	I1028 10:55:50.569120  141007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 10:55:50.579506  141007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 10:55:50.589812  141007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 10:55:50.599728  141007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 10:55:50.599750  141007 kubeadm.go:157] found existing configuration files:
	
	I1028 10:55:50.599793  141007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 10:55:50.609750  141007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 10:55:50.609839  141007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 10:55:50.620345  141007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 10:55:50.630257  141007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 10:55:50.630319  141007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 10:55:50.640199  141007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 10:55:50.649294  141007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 10:55:50.649367  141007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 10:55:50.659991  141007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 10:55:50.669502  141007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 10:55:50.669581  141007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 10:55:50.680184  141007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 10:55:50.870066  141007 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 10:56:00.524626  141007 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 10:56:00.524738  141007 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 10:56:00.524847  141007 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 10:56:00.525002  141007 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 10:56:00.525131  141007 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 10:56:00.525219  141007 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 10:56:00.526778  141007 out.go:235]   - Generating certificates and keys ...
	I1028 10:56:00.526873  141007 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 10:56:00.526963  141007 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 10:56:00.527049  141007 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 10:56:00.527114  141007 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 10:56:00.527189  141007 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 10:56:00.527276  141007 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 10:56:00.527349  141007 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 10:56:00.527507  141007 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-892779 localhost] and IPs [192.168.39.106 127.0.0.1 ::1]
	I1028 10:56:00.527593  141007 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 10:56:00.527741  141007 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-892779 localhost] and IPs [192.168.39.106 127.0.0.1 ::1]
	I1028 10:56:00.527843  141007 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 10:56:00.528019  141007 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 10:56:00.528087  141007 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 10:56:00.528158  141007 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 10:56:00.528239  141007 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 10:56:00.528321  141007 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 10:56:00.528415  141007 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 10:56:00.528502  141007 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 10:56:00.528553  141007 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 10:56:00.528620  141007 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 10:56:00.528688  141007 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 10:56:00.530247  141007 out.go:235]   - Booting up control plane ...
	I1028 10:56:00.530345  141007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 10:56:00.530419  141007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 10:56:00.530481  141007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 10:56:00.530579  141007 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 10:56:00.530673  141007 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 10:56:00.530712  141007 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 10:56:00.530822  141007 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 10:56:00.530924  141007 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 10:56:00.530974  141007 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092577ms
	I1028 10:56:00.531050  141007 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 10:56:00.531099  141007 kubeadm.go:310] [api-check] The API server is healthy after 5.502361567s
	I1028 10:56:00.531190  141007 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 10:56:00.531299  141007 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 10:56:00.531356  141007 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 10:56:00.531559  141007 kubeadm.go:310] [mark-control-plane] Marking the node addons-892779 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 10:56:00.531626  141007 kubeadm.go:310] [bootstrap-token] Using token: h4n5ke.6v6qoasogb607car
	I1028 10:56:00.533320  141007 out.go:235]   - Configuring RBAC rules ...
	I1028 10:56:00.533455  141007 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 10:56:00.533581  141007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 10:56:00.533773  141007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 10:56:00.533896  141007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 10:56:00.534001  141007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 10:56:00.534078  141007 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 10:56:00.534176  141007 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 10:56:00.534215  141007 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 10:56:00.534262  141007 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 10:56:00.534268  141007 kubeadm.go:310] 
	I1028 10:56:00.534317  141007 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 10:56:00.534326  141007 kubeadm.go:310] 
	I1028 10:56:00.534399  141007 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 10:56:00.534405  141007 kubeadm.go:310] 
	I1028 10:56:00.534425  141007 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 10:56:00.534511  141007 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 10:56:00.534595  141007 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 10:56:00.534610  141007 kubeadm.go:310] 
	I1028 10:56:00.534689  141007 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 10:56:00.534698  141007 kubeadm.go:310] 
	I1028 10:56:00.534765  141007 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 10:56:00.534774  141007 kubeadm.go:310] 
	I1028 10:56:00.534850  141007 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 10:56:00.534955  141007 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 10:56:00.535056  141007 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 10:56:00.535062  141007 kubeadm.go:310] 
	I1028 10:56:00.535133  141007 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 10:56:00.535205  141007 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 10:56:00.535211  141007 kubeadm.go:310] 
	I1028 10:56:00.535289  141007 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h4n5ke.6v6qoasogb607car \
	I1028 10:56:00.535378  141007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 10:56:00.535398  141007 kubeadm.go:310] 	--control-plane 
	I1028 10:56:00.535403  141007 kubeadm.go:310] 
	I1028 10:56:00.535471  141007 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 10:56:00.535477  141007 kubeadm.go:310] 
	I1028 10:56:00.535555  141007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h4n5ke.6v6qoasogb607car \
	I1028 10:56:00.535670  141007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 10:56:00.535686  141007 cni.go:84] Creating CNI manager for ""
	I1028 10:56:00.535696  141007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 10:56:00.537466  141007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 10:56:00.539046  141007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 10:56:00.550808  141007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 10:56:00.578183  141007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 10:56:00.578338  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-892779 minikube.k8s.io/updated_at=2024_10_28T10_56_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=addons-892779 minikube.k8s.io/primary=true
	I1028 10:56:00.578344  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:00.592539  141007 ops.go:34] apiserver oom_adj: -16
	I1028 10:56:00.751656  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:01.251749  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:01.751868  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:02.252743  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:02.752103  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:03.252487  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:03.751837  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:04.251771  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:04.751987  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:04.898615  141007 kubeadm.go:1113] duration metric: took 4.320420995s to wait for elevateKubeSystemPrivileges
	I1028 10:56:04.898659  141007 kubeadm.go:394] duration metric: took 14.373143469s to StartCluster
	I1028 10:56:04.898682  141007 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:56:04.898813  141007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 10:56:04.899156  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:56:04.899386  141007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 10:56:04.899404  141007 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 10:56:04.899472  141007 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1028 10:56:04.899607  141007 addons.go:69] Setting yakd=true in profile "addons-892779"
	I1028 10:56:04.899628  141007 addons.go:69] Setting default-storageclass=true in profile "addons-892779"
	I1028 10:56:04.899624  141007 addons.go:69] Setting inspektor-gadget=true in profile "addons-892779"
	I1028 10:56:04.899642  141007 addons.go:69] Setting metrics-server=true in profile "addons-892779"
	I1028 10:56:04.899648  141007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-892779"
	I1028 10:56:04.899652  141007 addons.go:234] Setting addon inspektor-gadget=true in "addons-892779"
	I1028 10:56:04.899659  141007 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-892779"
	I1028 10:56:04.899672  141007 addons.go:69] Setting gcp-auth=true in profile "addons-892779"
	I1028 10:56:04.899685  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899689  141007 addons.go:69] Setting volcano=true in profile "addons-892779"
	I1028 10:56:04.899696  141007 mustload.go:65] Loading cluster: addons-892779
	I1028 10:56:04.899699  141007 addons.go:234] Setting addon volcano=true in "addons-892779"
	I1028 10:56:04.899700  141007 config.go:182] Loaded profile config "addons-892779": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 10:56:04.899697  141007 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-892779"
	I1028 10:56:04.899747  141007 addons.go:69] Setting storage-provisioner=true in profile "addons-892779"
	I1028 10:56:04.899779  141007 addons.go:69] Setting ingress-dns=true in profile "addons-892779"
	I1028 10:56:04.899799  141007 addons.go:234] Setting addon ingress-dns=true in "addons-892779"
	I1028 10:56:04.899652  141007 addons.go:234] Setting addon metrics-server=true in "addons-892779"
	I1028 10:56:04.899825  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899854  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899897  141007 config.go:182] Loaded profile config "addons-892779": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 10:56:04.900161  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900171  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900200  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.900205  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.900268  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900291  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900314  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.899730  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.900345  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.900357  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.900322  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899785  141007 addons.go:234] Setting addon storage-provisioner=true in "addons-892779"
	I1028 10:56:04.900623  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.900674  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900703  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899652  141007 addons.go:69] Setting ingress=true in profile "addons-892779"
	I1028 10:56:04.900845  141007 addons.go:234] Setting addon ingress=true in "addons-892779"
	I1028 10:56:04.900883  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899752  141007 addons.go:69] Setting cloud-spanner=true in profile "addons-892779"
	I1028 10:56:04.900994  141007 addons.go:234] Setting addon cloud-spanner=true in "addons-892779"
	I1028 10:56:04.901003  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.901021  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.901032  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899738  141007 addons.go:69] Setting volumesnapshots=true in profile "addons-892779"
	I1028 10:56:04.901237  141007 addons.go:234] Setting addon volumesnapshots=true in "addons-892779"
	I1028 10:56:04.901264  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.901281  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.901297  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899678  141007 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-892779"
	I1028 10:56:04.901377  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.901421  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.901455  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899760  141007 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-892779"
	I1028 10:56:04.902263  141007 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-892779"
	I1028 10:56:04.902295  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899763  141007 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-892779"
	I1028 10:56:04.902691  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.902731  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899770  141007 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-892779"
	I1028 10:56:04.903098  141007 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-892779"
	I1028 10:56:04.903134  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.903165  141007 out.go:177] * Verifying Kubernetes components...
	I1028 10:56:04.899633  141007 addons.go:234] Setting addon yakd=true in "addons-892779"
	I1028 10:56:04.903320  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.903500  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.903529  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.903657  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.903681  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899742  141007 addons.go:69] Setting registry=true in profile "addons-892779"
	I1028 10:56:04.903851  141007 addons.go:234] Setting addon registry=true in "addons-892779"
	I1028 10:56:04.903889  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.904939  141007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:56:04.922302  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37649
	I1028 10:56:04.922998  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.923147  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I1028 10:56:04.924156  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.924196  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.924422  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.924462  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I1028 10:56:04.924476  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.924543  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.924564  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.924575  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.924613  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.925012  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
	I1028 10:56:04.925190  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I1028 10:56:04.925306  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.925496  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.925509  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.925852  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.925919  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.926014  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.926021  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.926076  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.926599  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.926618  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.926673  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.926716  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.926825  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.926836  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.927240  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.927274  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.927614  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I1028 10:56:04.927778  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.928129  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.928150  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.930075  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.930118  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.939058  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.939126  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.939150  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.939216  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.939531  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.940395  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.940421  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.940503  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.941376  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.941875  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.941916  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.942317  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.942361  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.942687  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.942731  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.945847  141007 addons.go:234] Setting addon default-storageclass=true in "addons-892779"
	I1028 10:56:04.945899  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.946337  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.946494  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.947715  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I1028 10:56:04.948308  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.948894  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.948911  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.949325  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.949762  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.949787  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.956931  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36571
	I1028 10:56:04.957615  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.958375  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.958395  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.958838  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.959612  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.959786  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.962949  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I1028 10:56:04.962990  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I1028 10:56:04.963399  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.963501  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.963904  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.963924  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.964080  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.964092  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.964503  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.964519  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.964565  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I1028 10:56:04.965132  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.965174  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.965693  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.966244  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.966266  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.966650  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.970721  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I1028 10:56:04.971965  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I1028 10:56:04.976189  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I1028 10:56:04.976210  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I1028 10:56:04.976879  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I1028 10:56:04.977251  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I1028 10:56:04.977722  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.978269  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.978292  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.978734  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.978912  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.982293  141007 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-892779"
	I1028 10:56:04.982351  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.982715  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.982753  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.982763  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.982790  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.983293  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.983337  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.983868  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I1028 10:56:04.983969  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984024  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I1028 10:56:04.984158  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984260  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984443  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984641  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.984662  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.984763  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I1028 10:56:04.984818  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.984840  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.984910  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984981  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.984992  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.985453  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.985537  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.985549  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.985565  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.985908  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.985923  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.986084  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.986094  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.986143  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.986321  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.986333  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.986390  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.987121  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.987158  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.987376  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.987397  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.987465  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.987509  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.987708  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.987773  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.989234  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.989574  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.989595  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.989983  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.990129  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.990142  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.990202  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.990608  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.990817  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.991736  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.992116  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.992281  141007 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 10:56:04.992589  141007 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 10:56:04.992967  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.993216  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.993859  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:04.993876  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:04.993990  141007 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 10:56:04.994007  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 10:56:04.994026  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:04.994693  141007 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 10:56:04.994708  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 10:56:04.994726  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:04.994862  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:04.994903  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:04.994910  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:04.994917  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:04.994923  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:04.997313  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.997894  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.999170  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:05.000904  141007 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 10:56:05.001132  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.001340  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.002088  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.002131  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.002159  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.002171  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.002369  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.002436  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.002525  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:05.002538  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:05.002586  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	W1028 10:56:05.002641  141007 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1028 10:56:05.002727  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.002855  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.003158  141007 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 10:56:05.003171  141007 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 10:56:05.003189  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.003250  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.004432  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.004608  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.007666  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.008275  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.008304  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.008538  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.008768  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.008940  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.009076  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.018365  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I1028 10:56:05.018775  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.019551  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.019577  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.019957  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.020037  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I1028 10:56:05.020232  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I1028 10:56:05.020336  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.020830  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.022003  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.023087  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.023643  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.023664  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.024107  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.024126  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.024542  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.024593  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.025149  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:05.025192  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:05.025485  141007 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 10:56:05.025674  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.027228  141007 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 10:56:05.027256  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 10:56:05.027281  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.028535  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.030376  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 10:56:05.031198  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I1028 10:56:05.031221  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.032838  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 10:56:05.033366  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34805
	I1028 10:56:05.033840  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.033950  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.033978  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.034163  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.034330  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.034476  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.034534  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I1028 10:56:05.034818  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.035001  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.035017  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.035088  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.035163  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.035735  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 10:56:05.036032  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.036061  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.036220  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41661
	I1028 10:56:05.036359  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.036502  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.037065  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.037217  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.037521  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I1028 10:56:05.038158  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.038179  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.038537  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 10:56:05.038595  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.038800  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.039007  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.039067  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.039681  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.039906  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.040517  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.040538  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.040845  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.040863  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.041515  141007 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 10:56:05.041547  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.041521  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.041626  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 10:56:05.041695  141007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 10:56:05.041787  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.042428  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1028 10:56:05.042738  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.043206  141007 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 10:56:05.043235  141007 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 10:56:05.043239  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.043787  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.043807  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.043308  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.043356  141007 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 10:56:05.043895  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 10:56:05.043910  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.043978  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.044150  141007 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 10:56:05.044591  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.044347  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:05.044664  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:05.044894  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 10:56:05.045484  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:05.045558  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:05.046228  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45509
	I1028 10:56:05.047280  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.047307  141007 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 10:56:05.047284  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I1028 10:56:05.047452  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 10:56:05.047465  141007 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 10:56:05.047483  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.047830  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.048258  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.048281  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.048343  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.048588  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 10:56:05.048918  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.048963  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.049113  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.049126  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.049187  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.049297  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.049316  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.049466  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.049487  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.049661  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.049724  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.050127  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.050144  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.050317  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.050479  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.050656  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.050854  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.051148  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.051327  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.052171  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.052236  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.052575  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.052857  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.052885  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.053046  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.053186  141007 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 10:56:05.053236  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.053946  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.054074  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.054297  141007 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 10:56:05.054301  141007 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 10:56:05.055882  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I1028 10:56:05.056225  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.056407  141007 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 10:56:05.056429  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 10:56:05.056446  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.056544  141007 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 10:56:05.056653  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.056669  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.056938  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.057183  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.058117  141007 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 10:56:05.058141  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 10:56:05.058498  141007 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 10:56:05.058515  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 10:56:05.058533  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.059428  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.060118  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 10:56:05.060138  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 10:56:05.060161  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.060174  141007 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 10:56:05.060187  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 10:56:05.060204  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.060354  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.060844  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.060884  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.061121  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.061350  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.061508  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.061746  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.062049  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.062080  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 10:56:05.062549  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.062568  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.062726  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.062940  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.063106  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.063262  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.063511  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 10:56:05.063531  141007 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 10:56:05.063548  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.064639  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.065052  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.065074  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.065387  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.065653  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.065812  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.065915  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.066895  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.067349  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.067367  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.067560  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.067774  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.067835  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.068062  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.068202  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.068226  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.068259  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.068532  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.068705  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.068841  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.068964  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	W1028 10:56:05.069717  141007 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39100->192.168.39.106:22: read: connection reset by peer
	I1028 10:56:05.069746  141007 retry.go:31] will retry after 245.097269ms: ssh: handshake failed: read tcp 192.168.39.1:39100->192.168.39.106:22: read: connection reset by peer
	I1028 10:56:05.073068  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
	I1028 10:56:05.073448  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.073890  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.073905  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.074180  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.074338  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.075816  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.077862  141007 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 10:56:05.079584  141007 out.go:177]   - Using image docker.io/busybox:stable
	I1028 10:56:05.081127  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I1028 10:56:05.081297  141007 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 10:56:05.081317  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 10:56:05.081335  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.081704  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.082762  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.082786  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.083270  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.083786  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.084683  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.085087  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.085121  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.085240  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.085409  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.085552  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.085597  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.085721  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.085732  141007 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 10:56:05.085961  141007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 10:56:05.085983  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.089056  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.089433  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.089451  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.089671  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.089856  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.090056  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.090195  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.494484  141007 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 10:56:05.494514  141007 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 10:56:05.588140  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 10:56:05.607572  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 10:56:05.607598  141007 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 10:56:05.626542  141007 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 10:56:05.626572  141007 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 10:56:05.631247  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 10:56:05.633946  141007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 10:56:05.634001  141007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 10:56:05.638847  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 10:56:05.657282  141007 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 10:56:05.657311  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 10:56:05.662039  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 10:56:05.669195  141007 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 10:56:05.669226  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 10:56:05.680679  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 10:56:05.682878  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 10:56:05.704388  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 10:56:05.746944  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 10:56:05.747167  141007 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 10:56:05.747189  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 10:56:05.798147  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 10:56:05.798178  141007 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 10:56:05.908680  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 10:56:05.908705  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 10:56:05.916166  141007 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 10:56:05.916191  141007 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 10:56:05.944477  141007 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 10:56:05.944509  141007 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 10:56:05.953661  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 10:56:06.056040  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 10:56:06.073011  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 10:56:06.073045  141007 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 10:56:06.148319  141007 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 10:56:06.148347  141007 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 10:56:06.168180  141007 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 10:56:06.168210  141007 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 10:56:06.266092  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 10:56:06.266123  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 10:56:06.421030  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 10:56:06.421063  141007 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 10:56:06.453749  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 10:56:06.453783  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 10:56:06.534935  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 10:56:06.558395  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 10:56:06.558424  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 10:56:06.755468  141007 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 10:56:06.755493  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 10:56:06.762827  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 10:56:06.846776  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 10:56:06.846818  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 10:56:07.196882  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 10:56:07.275362  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 10:56:07.275397  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 10:56:07.340954  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.75277225s)
	I1028 10:56:07.341008  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:07.341020  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:07.341361  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:07.341385  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:07.341397  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:07.341407  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:07.341665  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:07.341728  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:07.341687  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:07.350705  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:07.350727  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:07.350986  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:07.351007  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:07.637863  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 10:56:07.637892  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 10:56:07.974684  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 10:56:07.974723  141007 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 10:56:08.213334  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 10:56:08.213369  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 10:56:08.390234  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 10:56:08.390272  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 10:56:08.743637  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 10:56:08.743670  141007 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 10:56:09.213168  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 10:56:09.955562  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.324274381s)
	I1028 10:56:09.955569  141007 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.32158678s)
	I1028 10:56:09.955631  141007 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.321602155s)
	I1028 10:56:09.955678  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.316806661s)
	I1028 10:56:09.955744  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.275036169s)
	I1028 10:56:09.955683  141007 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1028 10:56:09.955769  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.955779  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.955748  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.955903  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.955642  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.955965  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.955717  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.293647969s)
	I1028 10:56:09.956219  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.956230  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.956891  141007 node_ready.go:35] waiting up to 6m0s for node "addons-892779" to be "Ready" ...
	I1028 10:56:09.957101  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957122  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957120  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957136  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957144  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.957147  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957151  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.957154  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957161  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.957160  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957167  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.957192  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957200  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957208  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.957215  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.957253  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957275  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957281  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957289  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.957295  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.957511  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957545  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957575  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957580  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957606  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957614  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.958275  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.958287  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.958528  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.958538  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.987611  141007 node_ready.go:49] node "addons-892779" has status "Ready":"True"
	I1028 10:56:09.987645  141007 node_ready.go:38] duration metric: took 30.731126ms for node "addons-892779" to be "Ready" ...
	I1028 10:56:09.987658  141007 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 10:56:10.077186  141007 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:10.511739  141007 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-892779" context rescaled to 1 replicas
	I1028 10:56:12.038921  141007 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 10:56:12.038971  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:12.042346  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:12.042916  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:12.042953  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:12.043143  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:12.043382  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:12.043581  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:12.043746  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:12.105418  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:12.619990  141007 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 10:56:12.782618  141007 addons.go:234] Setting addon gcp-auth=true in "addons-892779"
	I1028 10:56:12.782689  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:12.783381  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:12.783461  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:12.799289  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I1028 10:56:12.799834  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:12.800329  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:12.800351  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:12.800705  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:12.801294  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:12.801343  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:12.817122  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I1028 10:56:12.817645  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:12.818194  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:12.818219  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:12.818568  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:12.818760  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:12.820463  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:12.820708  141007 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 10:56:12.820739  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:12.823309  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:12.823655  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:12.823684  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:12.823825  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:12.824004  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:12.824159  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:12.824275  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:14.142461  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:14.182020  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.49910956s)
	I1028 10:56:14.182067  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182076  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182173  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.477748281s)
	I1028 10:56:14.182224  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182235  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182336  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.435360522s)
	I1028 10:56:14.182343  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.182367  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182378  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182391  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.182415  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.182425  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182433  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182493  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.228783662s)
	I1028 10:56:14.182529  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182541  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182549  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.126472728s)
	I1028 10:56:14.182580  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182590  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182640  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.182659  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.182676  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.182686  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182693  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182733  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.647738425s)
	I1028 10:56:14.182800  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182832  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182834  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.419954937s)
	I1028 10:56:14.182862  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182875  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182904  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.182925  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.182972  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.182980  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.182988  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182997  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.183015  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.986100551s)
	W1028 10:56:14.183041  141007 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 10:56:14.183087  141007 retry.go:31] will retry after 360.732586ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
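
The failed apply above is an ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl batch as the CRDs that define it, and the snapshot.storage.k8s.io/v1 API is not discoverable until those CRDs reach the Established condition, hence "ensure CRDs are installed first". minikube simply retries (the later retry at 10:56:14 re-applies the same manifests with --force). A minimal Go sketch of the alternative wait-then-apply pattern, assuming kubectl is on PATH; the helper name, timeout, and retry count are illustrative and not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWhenCRDEstablished waits for a CRD to report Established before
// applying manifests that reference it, avoiding the race logged above.
// Illustrative only.
func applyWhenCRDEstablished(kubeconfig, crd string, manifests ...string) error {
	// Block until the API server has accepted and established the CRD.
	wait := exec.Command("kubectl", "--kubeconfig", kubeconfig, "wait",
		"--for=condition=established", "--timeout=120s", "crd/"+crd)
	if out, err := wait.CombinedOutput(); err != nil {
		return fmt.Errorf("waiting for CRD %s: %v\n%s", crd, err, out)
	}
	// Apply the dependent manifests, retrying once for discovery-cache lag.
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastErr error
	for attempt := 0; attempt < 2; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		time.Sleep(2 * time.Second)
	}
	return lastErr
}

func main() {
	if err := applyWhenCRDEstablished("/var/lib/minikube/kubeconfig",
		"volumesnapshotclasses.snapshot.storage.k8s.io",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		fmt.Println(err)
	}
}
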
	I1028 10:56:14.183111  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.183121  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.183131  141007 addons.go:475] Verifying addon ingress=true in "addons-892779"
	I1028 10:56:14.183132  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.183159  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.183166  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.183174  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.183181  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.183268  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.183284  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.183328  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.183338  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.183346  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.183352  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.184727  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.184755  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.184762  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.184770  141007 addons.go:475] Verifying addon metrics-server=true in "addons-892779"
	I1028 10:56:14.185005  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.185025  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.185047  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.185054  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.185230  141007 out.go:177] * Verifying ingress addon...
	I1028 10:56:14.185646  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.185679  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.185685  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.185693  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.185700  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.183244  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.186376  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.186404  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.186410  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.186477  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.186490  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.186500  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.186508  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.186616  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.186640  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.186646  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.186656  141007 addons.go:475] Verifying addon registry=true in "addons-892779"
	I1028 10:56:14.186926  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.186957  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.186965  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.187844  141007 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 10:56:14.188578  141007 out.go:177] * Verifying registry addon...
	I1028 10:56:14.188584  141007 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-892779 service yakd-dashboard -n yakd-dashboard
	
	I1028 10:56:14.190636  141007 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 10:56:14.194468  141007 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 10:56:14.194489  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:14.200974  141007 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 10:56:14.201005  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
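
The kapi.go lines above list the pods matching a label selector and then poll until each one reports Ready. A minimal client-go sketch of that polling loop, with illustrative names, namespace, and timeout (not minikube's actual helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsReady polls pods matching selector in ns until all of them have
// the Ready condition, or the timeout expires. Illustrative only.
func waitForPodsReady(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !podReady(&p) {
				ready = false
				break
			}
		}
		if ready {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for pods %q in %q", selector, ns)
}

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodsReady(cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
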
	I1028 10:56:14.241040  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.241064  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.241338  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.241358  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.545072  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 10:56:14.946180  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:14.948674  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:15.205785  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:15.207435  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:15.714816  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:15.731428  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:15.766546  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.553325605s)
	I1028 10:56:15.766617  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:15.766634  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:15.766636  141007 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.94589861s)
	I1028 10:56:15.766912  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:15.766980  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:15.766998  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:15.767011  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:15.767285  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:15.767345  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:15.767360  141007 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-892779"
	I1028 10:56:15.767321  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:15.768355  141007 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 10:56:15.769147  141007 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 10:56:15.770739  141007 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 10:56:15.771806  141007 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 10:56:15.772097  141007 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 10:56:15.772115  141007 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 10:56:15.798939  141007 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 10:56:15.798964  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:15.839144  141007 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 10:56:15.839179  141007 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 10:56:15.960754  141007 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 10:56:15.960783  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 10:56:16.064525  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.519337351s)
	I1028 10:56:16.064606  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:16.064634  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:16.064947  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:16.064968  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:16.064978  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:16.064986  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:16.065195  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:16.065284  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:16.065263  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:16.067863  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 10:56:16.196466  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:16.196656  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:16.276748  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:16.584459  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:16.694343  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:16.699045  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:16.778772  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:17.217312  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:17.217778  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:17.307497  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:17.571961  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.50404897s)
	I1028 10:56:17.572026  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:17.572044  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:17.572336  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:17.572356  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:17.572370  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:17.572378  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:17.572642  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:17.572662  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:17.574778  141007 addons.go:475] Verifying addon gcp-auth=true in "addons-892779"
	I1028 10:56:17.577220  141007 out.go:177] * Verifying gcp-auth addon...
	I1028 10:56:17.579814  141007 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 10:56:17.600397  141007 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 10:56:17.600429  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:17.700219  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:17.700512  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:17.800484  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:18.084251  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:18.192461  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:18.195205  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:18.280830  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:18.583585  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:18.586050  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:18.692420  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:18.694731  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:18.793852  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:19.087152  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:19.194577  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:19.196234  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:19.295800  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:19.585820  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:19.695247  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:19.695727  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:19.776939  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:20.084211  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:20.194258  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:20.195383  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:20.276273  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:20.584151  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:20.692189  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:20.693890  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:20.776464  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:21.083296  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:21.084491  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:21.192431  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:21.194165  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:21.277510  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:21.584798  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:21.692779  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:21.693981  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:21.777321  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:22.084734  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:22.192648  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:22.194619  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:22.277403  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:22.585931  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:22.694817  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:22.695158  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:22.987048  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:23.085848  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:23.087644  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:23.192827  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:23.197616  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:23.277254  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:23.583776  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:23.692563  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:23.694512  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:23.776915  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:24.084116  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:24.198793  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:24.199054  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:24.280040  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:24.583864  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:24.694913  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:24.698544  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:24.800076  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:25.084118  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:25.193087  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:25.194449  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:25.276823  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:25.583075  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:25.584246  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:25.695202  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:25.701115  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:25.777858  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:26.085518  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:26.195010  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:26.196129  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:26.278214  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:26.584334  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:26.849514  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:26.849795  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:26.850722  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:27.086474  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:27.192355  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:27.193853  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:27.277270  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:27.583434  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:27.696758  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:27.696871  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:27.796952  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:28.085430  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:28.087261  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:28.192653  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:28.193974  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:28.276986  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:28.584347  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:28.693883  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:28.695159  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:28.794128  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:29.084019  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:29.191790  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:29.193741  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:29.277218  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:29.584303  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:29.695868  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:29.695916  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:29.778666  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:30.085306  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:30.192325  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:30.198135  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:30.278780  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:30.584899  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:30.585193  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:30.694439  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:30.694570  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:30.776646  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:31.083766  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:31.191838  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:31.195724  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:31.276799  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:31.583116  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:31.693850  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:31.694757  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:31.777077  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:32.083692  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:32.192732  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:32.195382  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:32.276546  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:32.584035  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:32.691729  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:32.693423  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:32.776977  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:33.094792  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:33.095746  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:33.194945  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:33.205811  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:33.277752  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:33.584266  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:33.693394  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:33.697236  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:33.778521  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:34.083518  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:34.193118  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:34.194850  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:34.276993  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:34.584405  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:34.693307  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:34.695021  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:34.777077  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:35.084362  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:35.192675  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:35.194794  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:35.276995  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:35.583858  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:35.585294  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:35.693012  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:35.699168  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:35.793469  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:36.083447  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:36.193326  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:36.195797  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:36.277222  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:36.584552  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:36.694974  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:36.695621  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:36.776938  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:37.083052  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:37.192991  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:37.194413  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:37.276828  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:37.583984  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:37.693219  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:37.695035  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:37.777290  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:38.087352  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:38.090529  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:38.193216  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:38.196931  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:38.276910  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:38.583000  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:38.692314  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:38.694353  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:38.907668  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:39.084726  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:39.194776  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:39.198218  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:39.277094  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:39.584063  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:39.693417  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:39.694362  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:39.776817  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:40.083008  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:40.196305  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:40.198119  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:40.278007  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:40.923803  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:40.924062  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:40.924676  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:40.924790  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:40.926375  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:41.083295  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:41.194539  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:41.195023  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:41.297095  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:41.584206  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:41.693572  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:41.694828  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:41.778425  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:42.084715  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:42.193316  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:42.194801  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:42.277245  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:42.584752  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:42.692861  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:42.694288  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:42.776389  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:43.083730  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:43.084608  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:43.192971  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:43.194507  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:43.276636  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:43.585326  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:43.694145  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:43.694633  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:43.776499  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:44.085672  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:44.192856  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:44.194921  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:44.277553  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:44.583999  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:44.699369  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:44.700570  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:44.779222  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:45.085291  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:45.086190  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:45.194457  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:45.201722  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:45.276601  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:45.583235  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:45.692913  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:45.694580  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:45.777379  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:46.086914  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:46.194761  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:46.195303  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:46.278052  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:46.585119  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:46.692353  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:46.693860  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:46.776859  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:47.086720  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:47.192759  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:47.194070  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:47.293186  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:47.584079  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:47.584750  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:47.693535  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:47.694556  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:47.777634  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:48.083569  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:48.192759  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:48.195050  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:48.276937  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:48.606347  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:48.702142  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:48.702826  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:48.778544  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:49.083424  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:49.195410  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:49.198119  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:49.294695  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:49.584185  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:49.693138  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:49.695669  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:49.777685  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:50.082982  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:50.084579  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:50.193441  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:50.194970  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:50.277299  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:50.583791  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:50.692518  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:50.694068  141007 kapi.go:107] duration metric: took 36.503429654s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 10:56:50.776957  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:51.083849  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:51.193115  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:51.277769  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:51.584950  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:51.692998  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:51.776168  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:52.084492  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:52.191993  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:52.277086  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:52.585307  141007 pod_ready.go:93] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.585332  141007 pod_ready.go:82] duration metric: took 42.508101058s for pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.585341  141007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6ck8n" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.586217  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:52.590416  141007 pod_ready.go:93] pod "coredns-7c65d6cfc9-6ck8n" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.590436  141007 pod_ready.go:82] duration metric: took 5.088551ms for pod "coredns-7c65d6cfc9-6ck8n" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.590445  141007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpcnr" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.593274  141007 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-dpcnr" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-dpcnr" not found
	I1028 10:56:52.593296  141007 pod_ready.go:82] duration metric: took 2.845518ms for pod "coredns-7c65d6cfc9-dpcnr" in "kube-system" namespace to be "Ready" ...
	E1028 10:56:52.593307  141007 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-dpcnr" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-dpcnr" not found
	I1028 10:56:52.593316  141007 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.601714  141007 pod_ready.go:93] pod "etcd-addons-892779" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.601736  141007 pod_ready.go:82] duration metric: took 8.413215ms for pod "etcd-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.601745  141007 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.612134  141007 pod_ready.go:93] pod "kube-apiserver-addons-892779" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.612163  141007 pod_ready.go:82] duration metric: took 10.410128ms for pod "kube-apiserver-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.612175  141007 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.695000  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:52.778368  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:52.780492  141007 pod_ready.go:93] pod "kube-controller-manager-addons-892779" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.780513  141007 pod_ready.go:82] duration metric: took 168.331391ms for pod "kube-controller-manager-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.780525  141007 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgxl7" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.083513  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:53.182558  141007 pod_ready.go:93] pod "kube-proxy-pgxl7" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:53.182589  141007 pod_ready.go:82] duration metric: took 402.056282ms for pod "kube-proxy-pgxl7" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.182603  141007 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.191925  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:53.276800  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:53.583234  141007 pod_ready.go:93] pod "kube-scheduler-addons-892779" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:53.583258  141007 pod_ready.go:82] duration metric: took 400.648114ms for pod "kube-scheduler-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.583269  141007 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-n492w" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.584869  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:53.692804  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:53.777187  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:53.982207  141007 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-n492w" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:53.982231  141007 pod_ready.go:82] duration metric: took 398.955646ms for pod "nvidia-device-plugin-daemonset-n492w" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.982246  141007 pod_ready.go:39] duration metric: took 43.994575223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 10:56:53.982267  141007 api_server.go:52] waiting for apiserver process to appear ...
	I1028 10:56:53.982322  141007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 10:56:54.010437  141007 api_server.go:72] duration metric: took 49.110991575s to wait for apiserver process to appear ...
	I1028 10:56:54.010472  141007 api_server.go:88] waiting for apiserver healthz status ...
	I1028 10:56:54.010500  141007 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1028 10:56:54.014836  141007 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1028 10:56:54.015893  141007 api_server.go:141] control plane version: v1.31.2
	I1028 10:56:54.015919  141007 api_server.go:131] duration metric: took 5.439588ms to wait for apiserver health ...
	I1028 10:56:54.015928  141007 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 10:56:54.082985  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:54.187304  141007 system_pods.go:59] 18 kube-system pods found
	I1028 10:56:54.187342  141007 system_pods.go:61] "amd-gpu-device-plugin-77nkc" [9525ccf5-beb0-48e3-9612-30e31a087ca2] Running
	I1028 10:56:54.187350  141007 system_pods.go:61] "coredns-7c65d6cfc9-6ck8n" [22aed405-7302-480a-b873-02aecdc8c874] Running
	I1028 10:56:54.187360  141007 system_pods.go:61] "csi-hostpath-attacher-0" [596078c0-e9e3-4da9-99b7-fcf2ffb9ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1028 10:56:54.187367  141007 system_pods.go:61] "csi-hostpath-resizer-0" [2fc7fc41-f556-49a3-9922-73e16c67463a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1028 10:56:54.187378  141007 system_pods.go:61] "csi-hostpathplugin-f6btq" [100f5d1e-1127-4214-85ef-49474a262460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1028 10:56:54.187452  141007 system_pods.go:61] "etcd-addons-892779" [96816209-d6e4-41ff-a843-abb2caaa92f5] Running
	I1028 10:56:54.187462  141007 system_pods.go:61] "kube-apiserver-addons-892779" [fa0527e2-3605-47fa-8d62-ed7a49ae6a8d] Running
	I1028 10:56:54.187469  141007 system_pods.go:61] "kube-controller-manager-addons-892779" [5e473b38-93df-40f1-a084-586bce117796] Running
	I1028 10:56:54.187478  141007 system_pods.go:61] "kube-ingress-dns-minikube" [acf71611-aacb-4b72-aeb9-595f2d5717c0] Running
	I1028 10:56:54.187491  141007 system_pods.go:61] "kube-proxy-pgxl7" [3c85b65a-0083-48cd-8852-3ea8b3024bf3] Running
	I1028 10:56:54.187500  141007 system_pods.go:61] "kube-scheduler-addons-892779" [402a10fc-e775-4cea-84a4-6fec7e060c00] Running
	I1028 10:56:54.187509  141007 system_pods.go:61] "metrics-server-84c5f94fbc-748cp" [863279c2-0842-48b9-8840-31351b7a7bbc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 10:56:54.187518  141007 system_pods.go:61] "nvidia-device-plugin-daemonset-n492w" [17f0e2c2-6431-4f75-84a5-c4ccbb03c69f] Running
	I1028 10:56:54.187525  141007 system_pods.go:61] "registry-66c9cd494c-rnl5j" [5e520c13-81a2-4ebf-ab10-4fecd61cddd7] Running
	I1028 10:56:54.187534  141007 system_pods.go:61] "registry-proxy-7cjwq" [55548851-badf-40ba-a4b8-18d300af90f3] Running
	I1028 10:56:54.187544  141007 system_pods.go:61] "snapshot-controller-56fcc65765-82xbk" [f1f9cf16-2dec-41b4-9963-e49927080375] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 10:56:54.187556  141007 system_pods.go:61] "snapshot-controller-56fcc65765-mbt5s" [23af40a2-2f3d-4775-8bec-16437d1294f9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 10:56:54.187565  141007 system_pods.go:61] "storage-provisioner" [5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a] Running
	I1028 10:56:54.187576  141007 system_pods.go:74] duration metric: took 171.64213ms to wait for pod list to return data ...
	I1028 10:56:54.187586  141007 default_sa.go:34] waiting for default service account to be created ...
	I1028 10:56:54.191290  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:54.277741  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:54.383185  141007 default_sa.go:45] found service account: "default"
	I1028 10:56:54.383211  141007 default_sa.go:55] duration metric: took 195.618354ms for default service account to be created ...
	I1028 10:56:54.383220  141007 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 10:56:54.589601  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:54.593134  141007 system_pods.go:86] 18 kube-system pods found
	I1028 10:56:54.593172  141007 system_pods.go:89] "amd-gpu-device-plugin-77nkc" [9525ccf5-beb0-48e3-9612-30e31a087ca2] Running
	I1028 10:56:54.593182  141007 system_pods.go:89] "coredns-7c65d6cfc9-6ck8n" [22aed405-7302-480a-b873-02aecdc8c874] Running
	I1028 10:56:54.593191  141007 system_pods.go:89] "csi-hostpath-attacher-0" [596078c0-e9e3-4da9-99b7-fcf2ffb9ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1028 10:56:54.593200  141007 system_pods.go:89] "csi-hostpath-resizer-0" [2fc7fc41-f556-49a3-9922-73e16c67463a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1028 10:56:54.593211  141007 system_pods.go:89] "csi-hostpathplugin-f6btq" [100f5d1e-1127-4214-85ef-49474a262460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1028 10:56:54.593218  141007 system_pods.go:89] "etcd-addons-892779" [96816209-d6e4-41ff-a843-abb2caaa92f5] Running
	I1028 10:56:54.593227  141007 system_pods.go:89] "kube-apiserver-addons-892779" [fa0527e2-3605-47fa-8d62-ed7a49ae6a8d] Running
	I1028 10:56:54.593234  141007 system_pods.go:89] "kube-controller-manager-addons-892779" [5e473b38-93df-40f1-a084-586bce117796] Running
	I1028 10:56:54.593242  141007 system_pods.go:89] "kube-ingress-dns-minikube" [acf71611-aacb-4b72-aeb9-595f2d5717c0] Running
	I1028 10:56:54.593250  141007 system_pods.go:89] "kube-proxy-pgxl7" [3c85b65a-0083-48cd-8852-3ea8b3024bf3] Running
	I1028 10:56:54.593257  141007 system_pods.go:89] "kube-scheduler-addons-892779" [402a10fc-e775-4cea-84a4-6fec7e060c00] Running
	I1028 10:56:54.593266  141007 system_pods.go:89] "metrics-server-84c5f94fbc-748cp" [863279c2-0842-48b9-8840-31351b7a7bbc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 10:56:54.593278  141007 system_pods.go:89] "nvidia-device-plugin-daemonset-n492w" [17f0e2c2-6431-4f75-84a5-c4ccbb03c69f] Running
	I1028 10:56:54.593291  141007 system_pods.go:89] "registry-66c9cd494c-rnl5j" [5e520c13-81a2-4ebf-ab10-4fecd61cddd7] Running
	I1028 10:56:54.593297  141007 system_pods.go:89] "registry-proxy-7cjwq" [55548851-badf-40ba-a4b8-18d300af90f3] Running
	I1028 10:56:54.593307  141007 system_pods.go:89] "snapshot-controller-56fcc65765-82xbk" [f1f9cf16-2dec-41b4-9963-e49927080375] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 10:56:54.593321  141007 system_pods.go:89] "snapshot-controller-56fcc65765-mbt5s" [23af40a2-2f3d-4775-8bec-16437d1294f9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 10:56:54.593327  141007 system_pods.go:89] "storage-provisioner" [5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a] Running
	I1028 10:56:54.593338  141007 system_pods.go:126] duration metric: took 210.110388ms to wait for k8s-apps to be running ...
	I1028 10:56:54.593349  141007 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 10:56:54.593396  141007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 10:56:54.620361  141007 system_svc.go:56] duration metric: took 27.001371ms WaitForService to wait for kubelet
	I1028 10:56:54.620398  141007 kubeadm.go:582] duration metric: took 49.720961891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 10:56:54.620421  141007 node_conditions.go:102] verifying NodePressure condition ...
	I1028 10:56:54.692631  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:54.776687  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:54.781705  141007 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 10:56:54.781731  141007 node_conditions.go:123] node cpu capacity is 2
	I1028 10:56:54.781744  141007 node_conditions.go:105] duration metric: took 161.31783ms to run NodePressure ...
	I1028 10:56:54.781757  141007 start.go:241] waiting for startup goroutines ...
	I1028 10:56:55.083056  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:55.193651  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:55.276306  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:55.583975  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:55.693766  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:55.777895  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:56.083420  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:56.192676  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:56.276330  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:56.584428  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:56.692881  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:56.776453  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:57.084148  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:57.192813  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:57.276679  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:57.583340  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:57.692145  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:57.776595  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:58.083373  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:58.192389  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:58.277641  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:58.583470  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:58.692898  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:58.914972  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:59.084643  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:59.192696  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:59.283814  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:59.584408  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:59.692673  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:59.776824  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:00.083625  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:00.192534  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:00.276438  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:00.585468  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:00.692565  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:00.776871  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:01.084545  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:01.192581  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:01.280875  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:01.584267  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:01.692355  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:01.777632  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:02.084478  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:02.194419  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:02.278218  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:02.583474  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:02.694502  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:02.776918  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:03.083761  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:03.192967  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:03.276499  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:03.878769  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:03.879592  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:03.879676  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:04.083917  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:04.192945  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:04.277550  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:04.584188  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:04.692993  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:04.776907  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:05.083672  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:05.196850  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:05.276613  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:05.583439  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:05.694452  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:05.777080  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:06.089568  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:06.192524  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:06.276968  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:06.585136  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:06.694206  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:06.777921  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:07.083751  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:07.192887  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:07.276677  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:07.583744  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:07.692762  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:07.776542  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:08.083161  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:08.193032  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:08.276755  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:08.584530  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:08.692262  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:08.788827  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:09.083148  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:09.194362  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:09.278915  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:09.584245  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:09.700734  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:09.802603  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:10.084297  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:10.192575  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:10.277280  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:10.584141  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:10.703753  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:10.777603  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:11.084286  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:11.193191  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:11.276788  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:11.708223  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:11.811592  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:11.811878  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:12.084398  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:12.192998  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:12.279791  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:12.583244  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:12.691614  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:12.776065  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:13.084326  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:13.192371  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:13.277820  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:13.584733  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:13.698227  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:13.795473  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:14.083977  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:14.192815  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:14.277226  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:14.583164  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:14.692528  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:14.776655  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:15.084013  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:15.192759  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:15.276422  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:15.583623  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:15.692735  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:15.780542  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:16.084804  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:16.193654  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:16.277376  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:16.583932  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:16.692811  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:16.777384  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:17.084099  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:17.192930  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:17.278169  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:17.584160  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:17.693651  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:17.777044  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:18.085097  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:18.192318  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:18.277351  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:18.583944  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:18.693896  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:18.781372  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:19.084721  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:19.196147  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:19.299380  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:19.584220  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:19.693675  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:19.778218  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:20.085334  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:20.204506  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:20.283654  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:20.584516  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:20.696276  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:20.777308  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:21.083988  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:21.193325  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:21.277124  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:21.585993  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:21.696310  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:21.821988  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:22.086963  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:22.192786  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:22.276697  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:22.590465  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:22.692980  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:22.777509  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:23.083397  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:23.192305  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:23.277823  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:23.584596  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:23.697423  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:23.777271  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:24.084330  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:24.192371  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:24.278165  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:24.583545  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:24.692210  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:24.777081  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:25.083912  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:25.192884  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:25.276518  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:25.584699  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:25.692317  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:25.776896  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:26.084012  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:26.193255  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:26.277332  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:26.583659  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:26.692528  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:26.777105  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:27.087223  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:27.192159  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:27.277412  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:27.583121  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:27.692641  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:27.776264  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:28.085589  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:28.192176  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:28.277541  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:28.583532  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:28.692445  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:28.786341  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:29.084467  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:29.192914  141007 kapi.go:107] duration metric: took 1m15.005065666s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1028 10:57:29.277646  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:29.584413  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:29.776547  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:30.083442  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:30.277322  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:30.584282  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:30.776934  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:31.083381  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:31.276968  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:31.584185  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:31.777057  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:32.084738  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:32.277898  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:32.586738  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:32.777269  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:33.083877  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:33.277096  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:33.584430  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:33.779412  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:34.084276  141007 kapi.go:107] duration metric: took 1m16.504448256s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1028 10:57:34.086123  141007 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-892779 cluster.
	I1028 10:57:34.087701  141007 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 10:57:34.089076  141007 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1028 10:57:34.276495  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:34.777139  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:35.276753  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:35.777692  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:36.276680  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:36.777135  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:37.278303  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:37.776940  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:38.278485  141007 kapi.go:107] duration metric: took 1m22.506680703s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 10:57:38.280588  141007 out.go:177] * Enabled addons: default-storageclass, ingress-dns, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, inspektor-gadget, metrics-server, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1028 10:57:38.282251  141007 addons.go:510] duration metric: took 1m33.382788307s for enable addons: enabled=[default-storageclass ingress-dns amd-gpu-device-plugin cloud-spanner nvidia-device-plugin inspektor-gadget metrics-server storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1028 10:57:38.282300  141007 start.go:246] waiting for cluster config update ...
	I1028 10:57:38.282322  141007 start.go:255] writing updated cluster config ...
	I1028 10:57:38.282578  141007 ssh_runner.go:195] Run: rm -f paused
	I1028 10:57:38.337860  141007 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 10:57:38.339804  141007 out.go:177] * Done! kubectl is now configured to use "addons-892779" cluster and "default" namespace by default
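
Note on the long run of kapi.go:96 "waiting for pod" lines above: minikube is polling each addon's label selector until a matching pod reports Running, and kapi.go:107 records the total wait once it does. As a rough illustration only, and not minikube's actual kapi.go code, a client-go polling loop of that shape might look like the sketch below; the selector, namespace, interval, and timeout are assumptions taken from this log, and the kubeconfig path is simply the default one.

// Illustrative sketch (assumptions noted above), not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (e.g. the one minikube writes for the cluster).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Label selector waited on in the log; namespace assumed for illustration.
	selector := "app.kubernetes.io/name=ingress-nginx"
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("ingress-nginx").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil // done: a matching pod is Running
				}
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			return false, nil
		})
	if err != nil {
		panic(err)
	}
}
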
	
	
	==> CRI-O <==
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.885932841Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e2546b31e218e78c0a0697e14dce01182ac744cdba7a0e0a1ddac9a24315d3fb,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-dv9b6,Uid:2e92733f-d380-42bd-b6ae-3b7e7fdafb42,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113244920907768,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-dv9b6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e92733f-d380-42bd-b6ae-3b7e7fdafb42,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:00:44.603850371Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:56c70124ad7b9cf1d45e1d3185a3d5187090eaeca0bad90ebdec95bfad89167d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:2818a832-80db-43ce-ad06-1d48dd9ab54e,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1730113105925301130,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2818a832-80db-43ce-ad06-1d48dd9ab54e,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T10:58:25.606773979Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c15cc11944ae6930d61769ef62b046eaac8edbe71566e3c36d65040133386ebc,Metadata:&PodSandboxMetadata{Name:busybox,Uid:da189efe-7ffa-4bdf-87b1-c414bec80098,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113059239085233,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da189efe-7ffa-4bdf-87b1-c414bec80098,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T10:57:38.929622757Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4d9bd30a315373919
a8a3860cdb4da0272c4f21a6f9c1db920630b063709bfd,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-5f85ff4588-75jz7,Uid:3007a48d-9d0d-4d58-8108-a70d9d3ee0c0,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113039095630932,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-75jz7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3007a48d-9d0d-4d58-8108-a70d9d3ee0c0,pod-template-hash: 5f85ff4588,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T10:56:13.946433432Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4123bc4c5f2a85ca9b5b3f2ab226bd429cd6d4e20d27de2de9b2be57bd2c9f58,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-748cp,Uid:863279c2-0842-48b9-8840-31351b7a7bbc,Namespace:kube-system,Attempt:0,},State
:SANDBOX_READY,CreatedAt:1730112971905176310,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-748cp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863279c2-0842-48b9-8840-31351b7a7bbc,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T10:56:11.288088277Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d7b08ce55325cb76d1eb32007dfea4fb937669eda88604d5a6fe2a881502cf9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730112971021026498,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,},Annotations:map[stri
ng]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-28T10:56:10.712049266Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:15c7abb0b8dfe1e38584037a10dc339e57c4bec16c077b4e4c4526fd29cad75c,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:acf71611-aacb-4b72-aeb9-595f2d5717c0,Namespace:
kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730112970696328398,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf71611-aacb-4b72-aeb9-595f2d5717c0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingres
s-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-10-28T10:56:09.979868845Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ce71747e8fd706948633a9c7f3c9b305ff31f2bc3be6a44e5b0db4525abfbbd,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-77nkc,Uid:9525ccf5-beb0-48e3-9612-30e31a087ca2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730112967819458737,Labels:map[string]string{controller-revision-hash: 59cf7d9b45,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-77nkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9525ccf5-beb0-48e3-9612-30e31a087ca2,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T10:56:07.488619812Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1df353cc353110
f318c4c2bc25bff2565de933e16806d45b9861a1560562f5a4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6ck8n,Uid:22aed405-7302-480a-b873-02aecdc8c874,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730112965141473375,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ck8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22aed405-7302-480a-b873-02aecdc8c874,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T10:56:04.802562490Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:49a4b2cf73cca1b0789a4194dca185e1a014e6f02100a26b52ca1e48eb1678e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-pgxl7,Uid:3c85b65a-0083-48cd-8852-3ea8b3024bf3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730112964935045377,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pg
xl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c85b65a-0083-48cd-8852-3ea8b3024bf3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T10:56:04.592394323Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f911272f42032b5d0719ea81142c54b744d1d1161be8c01a9cc4854467f359d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-892779,Uid:d2d358cbec9d53960f2e8c2a073980ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730112953818071458,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d358cbec9d53960f2e8c2a073980ca,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.106:8443,kubernetes.io/config.hash: d2d358cbec9d53960f2e8c2a073980ca,kubernetes.io/config.seen: 2024-1
0-28T10:55:53.340275455Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04a8e192f2825d752429db585411be59560d20bcf811b491dba0c69105b41d98,Metadata:&PodSandboxMetadata{Name:etcd-addons-892779,Uid:ca7012c93fee37dd1aba5ee6cd983cc2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730112953812455154,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7012c93fee37dd1aba5ee6cd983cc2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.106:2379,kubernetes.io/config.hash: ca7012c93fee37dd1aba5ee6cd983cc2,kubernetes.io/config.seen: 2024-10-28T10:55:53.340274093Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7c50b2a8ac7b3205b570362fb4d90aadd61611653848fd01096481bd16541859,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-892779,Uid:306263e3cf96cfdffef24db7e5f
787e3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730112953797126168,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306263e3cf96cfdffef24db7e5f787e3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 306263e3cf96cfdffef24db7e5f787e3,kubernetes.io/config.seen: 2024-10-28T10:55:53.340272791Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ad6a958974299b64fd49be850fe3d1c691052bb5325c0dd77fb50eaa75cad46b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-892779,Uid:60773ad812876b76f1cfd70b128a82db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730112953796515781,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-892779,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 60773ad812876b76f1cfd70b128a82db,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 60773ad812876b76f1cfd70b128a82db,kubernetes.io/config.seen: 2024-10-28T10:55:53.340269217Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f2e7d80d-c4cd-463e-895a-eb4de7f91480 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.886875307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ca2a2a1-367f-4c27-88e4-f3d524d524d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.886981554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ca2a2a1-367f-4c27-88e4-f3d524d524d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.887250788Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bbfd469316d63e98513adec00255a025913d65e3c42e43499d8e7f9dde137bf,PodSandboxId:56c70124ad7b9cf1d45e1d3185a3d5187090eaeca0bad90ebdec95bfad89167d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730113108278734917,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2818a832-80db-43ce-ad06-1d48dd9ab54e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c943bfd8c1add619282928a9c70e4aae114aaa8b9c9f1101561b1a540fbaf976,PodSandboxId:c15cc11944ae6930d61769ef62b046eaac8edbe71566e3c36d65040133386ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730113062774249467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da189efe-7ffa-4bdf-87b1-c414bec80098,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e264e9e21f22463afc10b819bdfc0701d17c57da087da3aab9f87795f08e8eb,PodSandboxId:e4d9bd30a315373919a8a3860cdb4da0272c4f21a6f9c1db920630b063709bfd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730113048359278892,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-75jz7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3007a48d-9d0d-4d58-8108-a70d9d3ee0c0,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9d766987fd8a8b8457f9cddfb073fe57f87ba6c692c204b21dc95bab725d6f56,PodSandboxId:7ce71747e8fd706948633a9c7f3c9b305ff31f2bc3be6a44e5b0db4525abfbbd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2b
b6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730113012065808270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-77nkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9525ccf5-beb0-48e3-9612-30e31a087ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0732bda83048a4e42ff522011a52d45179fc7a4dddebb5c2241f90d93b5a34a,PodSandboxId:15c7abb0b8dfe1e38584037a10dc339e57c4bec16c077b4e4c4526fd29cad75c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730112995593562055,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf71611-aacb-4b72-aeb9-595f2d5717c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c42ee46198bd2cf39a0f0d95e80f41547590799a8dd608b84ac08c1eac7eeaf,PodSandboxId:4123bc4c5f2a85ca9b5b3f2ab226bd429cd6d4e20d27de2de9b2be57bd2c9f58,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-s
erver/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730112986946771921,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-748cp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863279c2-0842-48b9-8840-31351b7a7bbc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3,PodSandboxId:9d7b08ce55325cb76d1eb32007dfea4f
b937669eda88604d5a6fe2a881502cf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730112972895303394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982,PodSandboxId:1df353cc353110f318c4c2bc25bff2565de933e16806
d45b9861a1560562f5a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730112968138612873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ck8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22aed405-7302-480a-b873-02aecdc8c874,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333,PodSandboxId:49a4b2cf73cca1b0789a4194dca185e1a014e6f02100a26b52ca1e48eb1678e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730112965503468081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgxl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c85b65a-0083-48cd-8852-3ea8b3024bf3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e,PodSandboxId:04a8e192f2825d752429db585411be59560d20bcf811b491dba0c69105b41d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730112954026512304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7012c93fee37dd1aba5ee6cd983cc2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a50d3
d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7,PodSandboxId:7c50b2a8ac7b3205b570362fb4d90aadd61611653848fd01096481bd16541859,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730112953978189950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306263e3cf96cfdffef24db7e5f787e3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2187e35168339bc921a854f
47f0bc151162610a8cc97a7ae9aafbe62afc8e52,PodSandboxId:ad6a958974299b64fd49be850fe3d1c691052bb5325c0dd77fb50eaa75cad46b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730112953972368798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60773ad812876b76f1cfd70b128a82db,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064a9faa86b183
8032c06de061cb7e0437be565324a77540d093a395e8bec074,PodSandboxId:3f911272f42032b5d0719ea81142c54b744d1d1161be8c01a9cc4854467f359d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730112954004854375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d358cbec9d53960f2e8c2a073980ca,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74"
id=1ca2a2a1-367f-4c27-88e4-f3d524d524d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.893791534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b36943fe-66fa-4eac-ad99-a64468b8a8d5 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.893856036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b36943fe-66fa-4eac-ad99-a64468b8a8d5 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.895146131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9791f48a-d8bd-493c-b68e-b5aedcee6828 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.896827906Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113245896801092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587566,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9791f48a-d8bd-493c-b68e-b5aedcee6828 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.897555608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17a4529a-3975-4680-abf2-ba113606f328 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.897633265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17a4529a-3975-4680-abf2-ba113606f328 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.898026353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bbfd469316d63e98513adec00255a025913d65e3c42e43499d8e7f9dde137bf,PodSandboxId:56c70124ad7b9cf1d45e1d3185a3d5187090eaeca0bad90ebdec95bfad89167d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730113108278734917,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2818a832-80db-43ce-ad06-1d48dd9ab54e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c943bfd8c1add619282928a9c70e4aae114aaa8b9c9f1101561b1a540fbaf976,PodSandboxId:c15cc11944ae6930d61769ef62b046eaac8edbe71566e3c36d65040133386ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730113062774249467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da189efe-7ffa-4bdf-87b1-c414bec80098,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e264e9e21f22463afc10b819bdfc0701d17c57da087da3aab9f87795f08e8eb,PodSandboxId:e4d9bd30a315373919a8a3860cdb4da0272c4f21a6f9c1db920630b063709bfd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730113048359278892,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-75jz7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3007a48d-9d0d-4d58-8108-a70d9d3ee0c0,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cba1268e3a63a79877a04d7b19605773146d47ba5cc07a5db84ec95eccda958c,PodSandboxId:599831c19911efbf880894a21c7bcf330651cc112cdfaaf308ad9b293a735b8b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730113032691785491,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-29rj2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab507d62-817c-492b-b6e3-d5c760885e88,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a1412d0aa5e2a52dea5c03a369266f4b06c2edab4d4b040aae9fdd73f05131,PodSandboxId:d0ae965a06c22f8e875e3083cb54ab15585d9c267937aa02b373472bef86fcc3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730113031952057008,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dl75x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 058d2e5e-a18a-4761-8b43-de37a4964aa0,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d766987fd8a8b8457f9cddfb073fe57f87ba6c692c204b21dc95bab725d6f56,PodSandboxId:7ce71747e8fd706948633a9c7f3c9b305ff31f2bc3be6a44e5b0db4525abfbbd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730113012065808270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-77nkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9525ccf5-beb0-48e3-9612-30e31a087ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0732bda83048a4e42ff522011a52d45179fc7a4dddebb5c2241f90d93b5a34a,PodSandboxId:15c7abb0b8dfe1e38584037a10dc339e57c4bec16c077b4e4c4526fd29cad75c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730112995593562055,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf71611-aacb-4b72-aeb9-595f2d5717c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c42ee46198bd2cf39a0f0d95e80f41547590799a8dd608b84ac08c1eac7eeaf,PodSandboxId:4123bc4c5f2a85ca9b5b3f2ab226bd429cd6d4e20d27de2de9b2be57bd2c9f58,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k
8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730112986946771921,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-748cp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863279c2-0842-48b9-8840-31351b7a7bbc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3,PodSandboxId:9d7b08ce55325cb76
d1eb32007dfea4fb937669eda88604d5a6fe2a881502cf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730112972895303394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982,PodSandboxId:1df353cc353110f318c4c2bc25bff
2565de933e16806d45b9861a1560562f5a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730112968138612873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ck8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22aed405-7302-480a-b873-02aecdc8c874,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333,PodSandboxId:49a4b2cf73cca1b0789a4194dca185e1a014e6f02100a26b52ca1e48eb1678e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730112965503468081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgxl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c85b65a-0083-48cd-8852-3ea8b3024bf3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e,PodSandboxId:04a8e192f2825d752429db585411be59560d20bcf811b491dba0c69105b41d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730112954026512304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7012c93fee37dd1aba5ee6cd983cc2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:84a50d3d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7,PodSandboxId:7c50b2a8ac7b3205b570362fb4d90aadd61611653848fd01096481bd16541859,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730112953978189950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306263e3cf96cfdffef24db7e5f787e3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2187e351
68339bc921a854f47f0bc151162610a8cc97a7ae9aafbe62afc8e52,PodSandboxId:ad6a958974299b64fd49be850fe3d1c691052bb5325c0dd77fb50eaa75cad46b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730112953972368798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60773ad812876b76f1cfd70b128a82db,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:064a9faa86b1838032c06de061cb7e0437be565324a77540d093a395e8bec074,PodSandboxId:3f911272f42032b5d0719ea81142c54b744d1d1161be8c01a9cc4854467f359d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730112954004854375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d358cbec9d53960f2e8c2a073980ca,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=17a4529a-3975-4680-abf2-ba113606f328 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.934401947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c16c77c-d16c-4451-b912-b077c81be888 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.934478965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c16c77c-d16c-4451-b912-b077c81be888 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.937132393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49ee3db7-eafe-47b6-8457-f3017272680f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.938297187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113245938268748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587566,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49ee3db7-eafe-47b6-8457-f3017272680f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.938872606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bc92ae2-28d8-4df3-b438-fd7ebce1e8cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.938998323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bc92ae2-28d8-4df3-b438-fd7ebce1e8cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.939332737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bbfd469316d63e98513adec00255a025913d65e3c42e43499d8e7f9dde137bf,PodSandboxId:56c70124ad7b9cf1d45e1d3185a3d5187090eaeca0bad90ebdec95bfad89167d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730113108278734917,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2818a832-80db-43ce-ad06-1d48dd9ab54e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c943bfd8c1add619282928a9c70e4aae114aaa8b9c9f1101561b1a540fbaf976,PodSandboxId:c15cc11944ae6930d61769ef62b046eaac8edbe71566e3c36d65040133386ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730113062774249467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da189efe-7ffa-4bdf-87b1-c414bec80098,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e264e9e21f22463afc10b819bdfc0701d17c57da087da3aab9f87795f08e8eb,PodSandboxId:e4d9bd30a315373919a8a3860cdb4da0272c4f21a6f9c1db920630b063709bfd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730113048359278892,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-75jz7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3007a48d-9d0d-4d58-8108-a70d9d3ee0c0,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cba1268e3a63a79877a04d7b19605773146d47ba5cc07a5db84ec95eccda958c,PodSandboxId:599831c19911efbf880894a21c7bcf330651cc112cdfaaf308ad9b293a735b8b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730113032691785491,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-29rj2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab507d62-817c-492b-b6e3-d5c760885e88,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a1412d0aa5e2a52dea5c03a369266f4b06c2edab4d4b040aae9fdd73f05131,PodSandboxId:d0ae965a06c22f8e875e3083cb54ab15585d9c267937aa02b373472bef86fcc3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730113031952057008,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dl75x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 058d2e5e-a18a-4761-8b43-de37a4964aa0,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d766987fd8a8b8457f9cddfb073fe57f87ba6c692c204b21dc95bab725d6f56,PodSandboxId:7ce71747e8fd706948633a9c7f3c9b305ff31f2bc3be6a44e5b0db4525abfbbd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730113012065808270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-77nkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9525ccf5-beb0-48e3-9612-30e31a087ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0732bda83048a4e42ff522011a52d45179fc7a4dddebb5c2241f90d93b5a34a,PodSandboxId:15c7abb0b8dfe1e38584037a10dc339e57c4bec16c077b4e4c4526fd29cad75c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730112995593562055,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf71611-aacb-4b72-aeb9-595f2d5717c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c42ee46198bd2cf39a0f0d95e80f41547590799a8dd608b84ac08c1eac7eeaf,PodSandboxId:4123bc4c5f2a85ca9b5b3f2ab226bd429cd6d4e20d27de2de9b2be57bd2c9f58,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k
8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730112986946771921,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-748cp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863279c2-0842-48b9-8840-31351b7a7bbc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3,PodSandboxId:9d7b08ce55325cb76
d1eb32007dfea4fb937669eda88604d5a6fe2a881502cf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730112972895303394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982,PodSandboxId:1df353cc353110f318c4c2bc25bff
2565de933e16806d45b9861a1560562f5a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730112968138612873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ck8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22aed405-7302-480a-b873-02aecdc8c874,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333,PodSandboxId:49a4b2cf73cca1b0789a4194dca185e1a014e6f02100a26b52ca1e48eb1678e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730112965503468081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgxl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c85b65a-0083-48cd-8852-3ea8b3024bf3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e,PodSandboxId:04a8e192f2825d752429db585411be59560d20bcf811b491dba0c69105b41d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730112954026512304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7012c93fee37dd1aba5ee6cd983cc2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:84a50d3d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7,PodSandboxId:7c50b2a8ac7b3205b570362fb4d90aadd61611653848fd01096481bd16541859,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730112953978189950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306263e3cf96cfdffef24db7e5f787e3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2187e351
68339bc921a854f47f0bc151162610a8cc97a7ae9aafbe62afc8e52,PodSandboxId:ad6a958974299b64fd49be850fe3d1c691052bb5325c0dd77fb50eaa75cad46b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730112953972368798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60773ad812876b76f1cfd70b128a82db,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:064a9faa86b1838032c06de061cb7e0437be565324a77540d093a395e8bec074,PodSandboxId:3f911272f42032b5d0719ea81142c54b744d1d1161be8c01a9cc4854467f359d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730112954004854375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d358cbec9d53960f2e8c2a073980ca,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=4bc92ae2-28d8-4df3-b438-fd7ebce1e8cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.974674377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a6960e9-a43c-4762-b019-d9eac5ab1925 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.974773718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a6960e9-a43c-4762-b019-d9eac5ab1925 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.978066404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e94dd6f5-c7c7-4398-b717-5519067075ae name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.979887916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113245979851953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587566,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e94dd6f5-c7c7-4398-b717-5519067075ae name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.986305387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c21b3115-6998-41ff-9a0e-10f8d3652b92 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.986370393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c21b3115-6998-41ff-9a0e-10f8d3652b92 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:00:45 addons-892779 crio[665]: time="2024-10-28 11:00:45.986767139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bbfd469316d63e98513adec00255a025913d65e3c42e43499d8e7f9dde137bf,PodSandboxId:56c70124ad7b9cf1d45e1d3185a3d5187090eaeca0bad90ebdec95bfad89167d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730113108278734917,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2818a832-80db-43ce-ad06-1d48dd9ab54e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c943bfd8c1add619282928a9c70e4aae114aaa8b9c9f1101561b1a540fbaf976,PodSandboxId:c15cc11944ae6930d61769ef62b046eaac8edbe71566e3c36d65040133386ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730113062774249467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da189efe-7ffa-4bdf-87b1-c414bec80098,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e264e9e21f22463afc10b819bdfc0701d17c57da087da3aab9f87795f08e8eb,PodSandboxId:e4d9bd30a315373919a8a3860cdb4da0272c4f21a6f9c1db920630b063709bfd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1730113048359278892,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-75jz7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3007a48d-9d0d-4d58-8108-a70d9d3ee0c0,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cba1268e3a63a79877a04d7b19605773146d47ba5cc07a5db84ec95eccda958c,PodSandboxId:599831c19911efbf880894a21c7bcf330651cc112cdfaaf308ad9b293a735b8b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1730113032691785491,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-29rj2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab507d62-817c-492b-b6e3-d5c760885e88,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a1412d0aa5e2a52dea5c03a369266f4b06c2edab4d4b040aae9fdd73f05131,PodSandboxId:d0ae965a06c22f8e875e3083cb54ab15585d9c267937aa02b373472bef86fcc3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1730113031952057008,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dl75x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 058d2e5e-a18a-4761-8b43-de37a4964aa0,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d766987fd8a8b8457f9cddfb073fe57f87ba6c692c204b21dc95bab725d6f56,PodSandboxId:7ce71747e8fd706948633a9c7f3c9b305ff31f2bc3be6a44e5b0db4525abfbbd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730113012065808270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-77nkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9525ccf5-beb0-48e3-9612-30e31a087ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0732bda83048a4e42ff522011a52d45179fc7a4dddebb5c2241f90d93b5a34a,PodSandboxId:15c7abb0b8dfe1e38584037a10dc339e57c4bec16c077b4e4c4526fd29cad75c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1730112995593562055,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf71611-aacb-4b72-aeb9-595f2d5717c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c42ee46198bd2cf39a0f0d95e80f41547590799a8dd608b84ac08c1eac7eeaf,PodSandboxId:4123bc4c5f2a85ca9b5b3f2ab226bd429cd6d4e20d27de2de9b2be57bd2c9f58,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730112986946771921,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-748cp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863279c2-0842-48b9-8840-31351b7a7bbc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3,PodSandboxId:9d7b08ce55325cb76d1eb32007dfea4fb937669eda88604d5a6fe2a881502cf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730112972895303394,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982,PodSandboxId:1df353cc353110f318c4c2bc25bff2565de933e16806d45b9861a1560562f5a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730112968138612873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ck8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22aed405-7302-480a-b873-02aecdc8c874,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333,PodSandboxId:49a4b2cf73cca1b0789a4194dca185e1a014e6f02100a26b52ca1e48eb1678e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730112965503468081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgxl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c85b65a-0083-48cd-8852-3ea8b3024bf3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e,PodSandboxId:04a8e192f2825d752429db585411be59560d20bcf811b491dba0c69105b41d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730112954026512304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7012c93fee37dd1aba5ee6cd983cc2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a50d3d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7,PodSandboxId:7c50b2a8ac7b3205b570362fb4d90aadd61611653848fd01096481bd16541859,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730112953978189950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306263e3cf96cfdffef24db7e5f787e3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2187e35168339bc921a854f47f0bc151162610a8cc97a7ae9aafbe62afc8e52,PodSandboxId:ad6a958974299b64fd49be850fe3d1c691052bb5325c0dd77fb50eaa75cad46b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730112953972368798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60773ad812876b76f1cfd70b128a82db,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064a9faa86b1838032c06de061cb7e0437be565324a77540d093a395e8bec074,PodSandboxId:3f911272f42032b5d0719ea81142c54b744d1d1161be8c01a9cc4854467f359d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730112954004854375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d358cbec9d53960f2e8c2a073980ca,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c21b3115-6998-41ff-9a0e-10f8d3652b92 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6bbfd469316d6       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   56c70124ad7b9       nginx
	c943bfd8c1add       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   c15cc11944ae6       busybox
	2e264e9e21f22       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   e4d9bd30a3153       ingress-nginx-controller-5f85ff4588-75jz7
	cba1268e3a63a       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   599831c19911e       ingress-nginx-admission-patch-29rj2
	27a1412d0aa5e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   d0ae965a06c22       ingress-nginx-admission-create-dl75x
	9d766987fd8a8       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     3 minutes ago       Running             amd-gpu-device-plugin     0                   7ce71747e8fd7       amd-gpu-device-plugin-77nkc
	c0732bda83048       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   15c7abb0b8dfe       kube-ingress-dns-minikube
	1c42ee46198bd       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   4123bc4c5f2a8       metrics-server-84c5f94fbc-748cp
	1bc876b3fa526       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   9d7b08ce55325       storage-provisioner
	52148186558e4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   1df353cc35311       coredns-7c65d6cfc9-6ck8n
	4d910baa7d462       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago       Running             kube-proxy                0                   49a4b2cf73cca       kube-proxy-pgxl7
	ddfe3ef897e6e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago       Running             etcd                      0                   04a8e192f2825       etcd-addons-892779
	064a9faa86b18       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             4 minutes ago       Running             kube-apiserver            0                   3f911272f4203       kube-apiserver-addons-892779
	84a50d3d1e447       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             4 minutes ago       Running             kube-scheduler            0                   7c50b2a8ac7b3       kube-scheduler-addons-892779
	e2187e3516833       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             4 minutes ago       Running             kube-controller-manager   0                   ad6a958974299       kube-controller-manager-addons-892779
	
	
	==> coredns [52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982] <==
	[INFO] 10.244.0.7:46907 - 32139 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000391208s
	[INFO] 10.244.0.7:46907 - 8866 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000086306s
	[INFO] 10.244.0.7:46907 - 14545 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000041652s
	[INFO] 10.244.0.7:46907 - 38489 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000085234s
	[INFO] 10.244.0.7:46907 - 55183 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000029966s
	[INFO] 10.244.0.7:46907 - 21400 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000099816s
	[INFO] 10.244.0.7:46907 - 60154 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000043744s
	[INFO] 10.244.0.7:34736 - 44936 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112007s
	[INFO] 10.244.0.7:34736 - 44671 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00005494s
	[INFO] 10.244.0.7:35072 - 53896 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065226s
	[INFO] 10.244.0.7:35072 - 53652 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052373s
	[INFO] 10.244.0.7:55299 - 45709 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058225s
	[INFO] 10.244.0.7:55299 - 45520 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027128s
	[INFO] 10.244.0.7:52040 - 53021 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054396s
	[INFO] 10.244.0.7:52040 - 52826 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00003203s
	[INFO] 10.244.0.23:44569 - 21571 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000501489s
	[INFO] 10.244.0.23:60086 - 51876 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000117509s
	[INFO] 10.244.0.23:38954 - 47169 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105666s
	[INFO] 10.244.0.23:44799 - 22400 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000246366s
	[INFO] 10.244.0.23:38522 - 53778 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000182546s
	[INFO] 10.244.0.23:55527 - 29578 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000250589s
	[INFO] 10.244.0.23:43016 - 12982 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001130984s
	[INFO] 10.244.0.23:36468 - 45838 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00153565s
	[INFO] 10.244.0.26:43544 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00033004s
	[INFO] 10.244.0.26:41735 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000223873s
	
	
	==> describe nodes <==
	Name:               addons-892779
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-892779
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=addons-892779
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T10_56_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-892779
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 10:55:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-892779
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:00:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 10:59:04 +0000   Mon, 28 Oct 2024 10:55:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 10:59:04 +0000   Mon, 28 Oct 2024 10:55:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 10:59:04 +0000   Mon, 28 Oct 2024 10:55:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 10:59:04 +0000   Mon, 28 Oct 2024 10:56:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    addons-892779
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e962354ae1a445f86f73c5d50c26841
	  System UUID:                8e962354-ae1a-445f-86f7-3c5d50c26841
	  Boot ID:                    109ad88a-d9b2-40ba-a8fe-b508dd97271e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-55bf9c44b4-dv9b6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-75jz7    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m33s
	  kube-system                 amd-gpu-device-plugin-77nkc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-7c65d6cfc9-6ck8n                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m42s
	  kube-system                 etcd-addons-892779                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m48s
	  kube-system                 kube-apiserver-addons-892779                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-addons-892779        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-proxy-pgxl7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-addons-892779                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 metrics-server-84c5f94fbc-748cp              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m39s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m53s (x8 over 4m53s)  kubelet          Node addons-892779 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s (x8 over 4m53s)  kubelet          Node addons-892779 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s (x7 over 4m53s)  kubelet          Node addons-892779 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m47s                  kubelet          Node addons-892779 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s                  kubelet          Node addons-892779 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s                  kubelet          Node addons-892779 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m46s                  kubelet          Node addons-892779 status is now: NodeReady
	  Normal  RegisteredNode           4m43s                  node-controller  Node addons-892779 event: Registered Node addons-892779 in Controller
	  Normal  CIDRAssignmentFailed     4m43s                  cidrAllocator    Node addons-892779 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.089607] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.001334] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.179892] kauditd_printk_skb: 131 callbacks suppressed
	[  +9.281163] kauditd_printk_skb: 92 callbacks suppressed
	[ +10.928316] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.591371] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.069154] kauditd_printk_skb: 4 callbacks suppressed
	[Oct28 10:57] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.119476] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.261308] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.416894] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.348892] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.378789] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.256652] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.243666] kauditd_printk_skb: 2 callbacks suppressed
	[Oct28 10:58] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.528815] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.013248] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.485731] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.301666] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.556706] kauditd_printk_skb: 53 callbacks suppressed
	[ +21.459436] kauditd_printk_skb: 2 callbacks suppressed
	[Oct28 10:59] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.869945] kauditd_printk_skb: 7 callbacks suppressed
	[Oct28 11:00] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e] <==
	{"level":"warn","ts":"2024-10-28T10:57:03.449892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.155709ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:57:03.450012Z","caller":"traceutil/trace.go:171","msg":"trace[515150145] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1001; }","duration":"123.206918ms","start":"2024-10-28T10:57:03.326714Z","end":"2024-10-28T10:57:03.449921Z","steps":["trace[515150145] 'agreement among raft nodes before linearized reading'  (duration: 122.594215ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:57:03.450382Z","caller":"traceutil/trace.go:171","msg":"trace[1439441168] transaction","detail":"{read_only:false; response_revision:1001; number_of_response:1; }","duration":"176.992769ms","start":"2024-10-28T10:57:03.273372Z","end":"2024-10-28T10:57:03.450365Z","steps":["trace[1439441168] 'process raft request'  (duration: 175.774551ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:57:03.861815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.187022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:57:03.861916Z","caller":"traceutil/trace.go:171","msg":"trace[991144534] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1001; }","duration":"182.297719ms","start":"2024-10-28T10:57:03.679608Z","end":"2024-10-28T10:57:03.861906Z","steps":["trace[991144534] 'range keys from in-memory index tree'  (duration: 182.143811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:57:03.862130Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.10578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:57:03.862172Z","caller":"traceutil/trace.go:171","msg":"trace[1342549661] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1001; }","duration":"290.15281ms","start":"2024-10-28T10:57:03.572013Z","end":"2024-10-28T10:57:03.862165Z","steps":["trace[1342549661] 'range keys from in-memory index tree'  (duration: 290.066027ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:57:11.693343Z","caller":"traceutil/trace.go:171","msg":"trace[841996832] transaction","detail":"{read_only:false; response_revision:1034; number_of_response:1; }","duration":"191.31022ms","start":"2024-10-28T10:57:11.502018Z","end":"2024-10-28T10:57:11.693328Z","steps":["trace[841996832] 'process raft request'  (duration: 191.183504ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:57:11.693723Z","caller":"traceutil/trace.go:171","msg":"trace[836715530] linearizableReadLoop","detail":"{readStateIndex:1068; appliedIndex:1068; }","duration":"122.850036ms","start":"2024-10-28T10:57:11.570864Z","end":"2024-10-28T10:57:11.693714Z","steps":["trace[836715530] 'read index received'  (duration: 122.84571ms)","trace[836715530] 'applied index is now lower than readState.Index'  (duration: 3.876µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T10:57:11.693830Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.945281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:57:11.693879Z","caller":"traceutil/trace.go:171","msg":"trace[1462564819] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1034; }","duration":"123.013438ms","start":"2024-10-28T10:57:11.570860Z","end":"2024-10-28T10:57:11.693873Z","steps":["trace[1462564819] 'agreement among raft nodes before linearized reading'  (duration: 122.931146ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:57:11.697226Z","caller":"traceutil/trace.go:171","msg":"trace[1576052646] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"107.507643ms","start":"2024-10-28T10:57:11.589707Z","end":"2024-10-28T10:57:11.697214Z","steps":["trace[1576052646] 'process raft request'  (duration: 107.277096ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:58:02.187867Z","caller":"traceutil/trace.go:171","msg":"trace[527153399] transaction","detail":"{read_only:false; response_revision:1337; number_of_response:1; }","duration":"350.011383ms","start":"2024-10-28T10:58:01.837840Z","end":"2024-10-28T10:58:02.187851Z","steps":["trace[527153399] 'process raft request'  (duration: 349.92303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:58:02.188070Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T10:58:01.837826Z","time spent":"350.168673ms","remote":"127.0.0.1:33176","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1305 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-28T10:58:02.188388Z","caller":"traceutil/trace.go:171","msg":"trace[624154602] linearizableReadLoop","detail":"{readStateIndex:1384; appliedIndex:1384; }","duration":"319.93912ms","start":"2024-10-28T10:58:01.868440Z","end":"2024-10-28T10:58:02.188379Z","steps":["trace[624154602] 'read index received'  (duration: 319.93669ms)","trace[624154602] 'applied index is now lower than readState.Index'  (duration: 1.999µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T10:58:02.188474Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.049291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:58:02.188518Z","caller":"traceutil/trace.go:171","msg":"trace[1650348841] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1337; }","duration":"320.09806ms","start":"2024-10-28T10:58:01.868414Z","end":"2024-10-28T10:58:02.188512Z","steps":["trace[1650348841] 'agreement among raft nodes before linearized reading'  (duration: 320.036697ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:58:02.188544Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T10:58:01.868383Z","time spent":"320.155938ms","remote":"127.0.0.1:32868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-28T10:58:02.188685Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.973153ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-28T10:58:02.188722Z","caller":"traceutil/trace.go:171","msg":"trace[1340379804] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1337; }","duration":"289.009445ms","start":"2024-10-28T10:58:01.899708Z","end":"2024-10-28T10:58:02.188717Z","steps":["trace[1340379804] 'agreement among raft nodes before linearized reading'  (duration: 288.929858ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:58:02.189199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.743396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-28T10:58:02.190903Z","caller":"traceutil/trace.go:171","msg":"trace[1361345971] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1337; }","duration":"125.443578ms","start":"2024-10-28T10:58:02.065443Z","end":"2024-10-28T10:58:02.190887Z","steps":["trace[1361345971] 'agreement among raft nodes before linearized reading'  (duration: 123.685479ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:58:36.631913Z","caller":"traceutil/trace.go:171","msg":"trace[1440678640] transaction","detail":"{read_only:false; response_revision:1618; number_of_response:1; }","duration":"235.447347ms","start":"2024-10-28T10:58:36.396426Z","end":"2024-10-28T10:58:36.631873Z","steps":["trace[1440678640] 'process raft request'  (duration: 235.230015ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:58:51.658342Z","caller":"traceutil/trace.go:171","msg":"trace[654820603] transaction","detail":"{read_only:false; response_revision:1651; number_of_response:1; }","duration":"235.456005ms","start":"2024-10-28T10:58:51.422870Z","end":"2024-10-28T10:58:51.658326Z","steps":["trace[654820603] 'process raft request'  (duration: 235.356092ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:59:22.346860Z","caller":"traceutil/trace.go:171","msg":"trace[412176079] transaction","detail":"{read_only:false; response_revision:1753; number_of_response:1; }","duration":"275.340853ms","start":"2024-10-28T10:59:22.071508Z","end":"2024-10-28T10:59:22.346849Z","steps":["trace[412176079] 'process raft request'  (duration: 275.02163ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:00:46 up 5 min,  0 users,  load average: 0.47, 1.22, 0.66
	Linux addons-892779 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [064a9faa86b1838032c06de061cb7e0437be565324a77540d093a395e8bec074] <==
	I1028 10:57:36.987511       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1028 10:57:37.013716       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1028 10:57:48.123577       1 conn.go:339] Error on socket receive: read tcp 192.168.39.106:8443->192.168.39.1:48188: use of closed network connection
	E1028 10:57:48.311146       1 conn.go:339] Error on socket receive: read tcp 192.168.39.106:8443->192.168.39.1:48200: use of closed network connection
	I1028 10:57:57.624007       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.48.229"}
	I1028 10:58:03.447192       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1028 10:58:04.490332       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 10:58:25.390292       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 10:58:25.648728       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.106.12"}
	E1028 10:58:44.334362       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1028 10:58:58.212703       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1028 10:59:23.200424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.200504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 10:59:23.227437       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.227492       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 10:59:23.239639       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.239700       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 10:59:23.279238       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.279293       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 10:59:23.414209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.414332       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 10:59:24.227700       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 10:59:24.414895       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 10:59:24.426878       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 11:00:44.765315       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.58.42"}
	
	
	==> kube-controller-manager [e2187e35168339bc921a854f47f0bc151162610a8cc97a7ae9aafbe62afc8e52] <==
	I1028 10:59:34.482284       1 shared_informer.go:320] Caches are synced for garbage collector
	W1028 10:59:38.879615       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 10:59:38.879673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 10:59:40.343331       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 10:59:40.343383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 10:59:43.562364       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 10:59:43.562537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 10:59:55.825617       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 10:59:55.825843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:00:00.038361       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:00:00.038462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:00:05.504659       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:00:05.504815       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:00:07.780896       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:00:07.781068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:00:27.912457       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:00:27.912535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:00:37.563611       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:00:37.564015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:00:41.756711       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:00:41.756767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1028 11:00:44.588639       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.230117ms"
	I1028 11:00:44.622553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.849874ms"
	I1028 11:00:44.623054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="185.053µs"
	I1028 11:00:44.627905       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="47.209µs"
	
	
	==> kube-proxy [4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 10:56:06.297114       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 10:56:06.327429       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E1028 10:56:06.327586       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 10:56:06.448807       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 10:56:06.448839       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 10:56:06.448875       1 server_linux.go:169] "Using iptables Proxier"
	I1028 10:56:06.453750       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 10:56:06.454111       1 server.go:483] "Version info" version="v1.31.2"
	I1028 10:56:06.454124       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:56:06.455610       1 config.go:199] "Starting service config controller"
	I1028 10:56:06.455626       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 10:56:06.455657       1 config.go:105] "Starting endpoint slice config controller"
	I1028 10:56:06.455661       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 10:56:06.456340       1 config.go:328] "Starting node config controller"
	I1028 10:56:06.456352       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 10:56:06.557878       1 shared_informer.go:320] Caches are synced for node config
	I1028 10:56:06.557930       1 shared_informer.go:320] Caches are synced for service config
	I1028 10:56:06.558000       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [84a50d3d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7] <==
	W1028 10:55:57.907526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 10:55:57.909337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:57.915393       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 10:55:57.915584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:57.958210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 10:55:57.958300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.031173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 10:55:58.031226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.081308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 10:55:58.081445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.081871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 10:55:58.081933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.175132       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 10:55:58.175182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.204982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 10:55:58.205110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.258721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 10:55:58.258816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.276427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 10:55:58.276750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.280041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 10:55:58.280150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.281844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 10:55:58.281910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1028 10:56:00.267199       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 11:00:40 addons-892779 kubelet[1203]: E1028 11:00:40.365410    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113240364769914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587566,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:00:40 addons-892779 kubelet[1203]: E1028 11:00:40.365855    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113240364769914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587566,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604534    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="csi-external-health-monitor-controller"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604599    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23af40a2-2f3d-4775-8bec-16437d1294f9" containerName="volume-snapshot-controller"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604609    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="596078c0-e9e3-4da9-99b7-fcf2ffb9ffb4" containerName="csi-attacher"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604615    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="liveness-probe"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604623    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37146935-06b9-41de-99c2-dec4e6254a90" containerName="task-pv-container"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604629    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1f9cf16-2dec-41b4-9963-e49927080375" containerName="volume-snapshot-controller"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604636    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fc7fc41-f556-49a3-9922-73e16c67463a" containerName="csi-resizer"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604642    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="hostpath"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604648    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="csi-provisioner"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604655    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="csi-snapshotter"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: E1028 11:00:44.604661    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="node-driver-registrar"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604714    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="csi-snapshotter"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604721    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f9cf16-2dec-41b4-9963-e49927080375" containerName="volume-snapshot-controller"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604726    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="23af40a2-2f3d-4775-8bec-16437d1294f9" containerName="volume-snapshot-controller"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604731    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc7fc41-f556-49a3-9922-73e16c67463a" containerName="csi-resizer"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604737    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="liveness-probe"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604741    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="596078c0-e9e3-4da9-99b7-fcf2ffb9ffb4" containerName="csi-attacher"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604747    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="37146935-06b9-41de-99c2-dec4e6254a90" containerName="task-pv-container"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604754    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="hostpath"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604760    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="csi-provisioner"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604767    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="csi-external-health-monitor-controller"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.604773    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="100f5d1e-1127-4214-85ef-49474a262460" containerName="node-driver-registrar"
	Oct 28 11:00:44 addons-892779 kubelet[1203]: I1028 11:00:44.724265    1203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxfm\" (UniqueName: \"kubernetes.io/projected/2e92733f-d380-42bd-b6ae-3b7e7fdafb42-kube-api-access-dnxfm\") pod \"hello-world-app-55bf9c44b4-dv9b6\" (UID: \"2e92733f-d380-42bd-b6ae-3b7e7fdafb42\") " pod="default/hello-world-app-55bf9c44b4-dv9b6"
	
	
	==> storage-provisioner [1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3] <==
	I1028 10:56:14.457485       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 10:56:14.526420       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 10:56:14.526483       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 10:56:14.561540       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 10:56:14.564476       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-892779_92f07eb9-7e6b-4755-9966-4a2a450cacc0!
	I1028 10:56:14.577854       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afa8d239-0350-46d2-83fe-2e4e4ea51edf", APIVersion:"v1", ResourceVersion:"761", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-892779_92f07eb9-7e6b-4755-9966-4a2a450cacc0 became leader
	I1028 10:56:14.766496       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-892779_92f07eb9-7e6b-4755-9966-4a2a450cacc0!
	

                                                
                                                
-- /stdout --
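The storage-provisioner log in the dump above follows the usual client-go pattern: the process first tries to acquire a leader lease in kube-system (k8s.io-minikube-hostpath) and only starts its provisioner controller once the lease is held. Below is a minimal stand-alone sketch of that pattern, not the provisioner's actual code: it uses a coordination.k8s.io Lease lock (the provisioner's event above shows an Endpoints-based lock), and the identity string is illustrative.

// Hypothetical sketch: gate controller start-up behind a leader lease,
// similar in spirit to the storage-provisioner log above. The lease name
// and namespace are taken from the log; everything else is illustrative.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease lock in kube-system, mirroring the lease name seen in the log.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Namespace: "kube-system",
			Name:      "k8s.io-minikube-hostpath",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // hypothetical identity
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Lease acquired: start the controller loop here.
				log.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}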
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-892779 -n addons-892779
helpers_test.go:261: (dbg) Run:  kubectl --context addons-892779 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-dv9b6 ingress-nginx-admission-create-dl75x ingress-nginx-admission-patch-29rj2
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-892779 describe pod hello-world-app-55bf9c44b4-dv9b6 ingress-nginx-admission-create-dl75x ingress-nginx-admission-patch-29rj2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-892779 describe pod hello-world-app-55bf9c44b4-dv9b6 ingress-nginx-admission-create-dl75x ingress-nginx-admission-patch-29rj2: exit status 1 (72.237474ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-dv9b6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-892779/192.168.39.106
	Start Time:       Mon, 28 Oct 2024 11:00:44 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dnxfm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dnxfm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-dv9b6 to addons-892779
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dl75x" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-29rj2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-892779 describe pod hello-world-app-55bf9c44b4-dv9b6 ingress-nginx-admission-create-dl75x ingress-nginx-admission-patch-29rj2: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 addons disable ingress-dns --alsologtostderr -v=1: (1.431606872s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 addons disable ingress --alsologtostderr -v=1: (7.750786724s)
--- FAIL: TestAddons/parallel/Ingress (151.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (349.57s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.648124ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-748cp" [863279c2-0842-48b9-8840-31351b7a7bbc] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004785969s
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (97.710087ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-892779, age: 2m4.858908326s

                                                
                                                
** /stderr **
I1028 10:58:02.860942  140303 retry.go:31] will retry after 3.647406808s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (73.344169ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6ck8n, age: 2m2.580069256s

                                                
                                                
** /stderr **
I1028 10:58:06.582000  140303 retry.go:31] will retry after 3.429763131s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (67.864747ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 2m3.078439823s

                                                
                                                
** /stderr **
I1028 10:58:10.080419  140303 retry.go:31] will retry after 8.666384627s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (79.984161ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 2m11.825618391s

                                                
                                                
** /stderr **
I1028 10:58:18.827579  140303 retry.go:31] will retry after 5.885783562s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (69.838122ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 2m17.781300511s

                                                
                                                
** /stderr **
I1028 10:58:24.783756  140303 retry.go:31] will retry after 11.547523996s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (72.713513ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 2m29.402277703s

                                                
                                                
** /stderr **
I1028 10:58:36.405138  140303 retry.go:31] will retry after 13.305621233s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (69.94229ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 2m42.778392232s

                                                
                                                
** /stderr **
I1028 10:58:49.781731  140303 retry.go:31] will retry after 27.583655565s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (151.003215ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 3m10.514776721s

                                                
                                                
** /stderr **
I1028 10:59:17.516952  140303 retry.go:31] will retry after 45.563956111s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (65.911992ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 3m56.149066897s

                                                
                                                
** /stderr **
I1028 11:00:03.151222  140303 retry.go:31] will retry after 1m11.144525672s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (64.702671ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 5m7.359589076s

                                                
                                                
** /stderr **
I1028 11:01:14.361746  140303 retry.go:31] will retry after 59.271918957s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (67.64394ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 6m6.703491206s

                                                
                                                
** /stderr **
I1028 11:02:13.705773  140303 retry.go:31] will retry after 34.653291596s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (70.622928ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 6m41.427881189s

                                                
                                                
** /stderr **
I1028 11:02:48.430511  140303 retry.go:31] will retry after 55.144266501s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-892779 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-892779 top pods -n kube-system: exit status 1 (66.107312ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-77nkc, age: 7m36.643352421s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
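The sequence above is the harness retrying "kubectl top pods" (retry.go) with growing waits until the Metrics API serves data or the overall budget is exhausted, at which point the test fails. A minimal stand-alone sketch of that loop follows; it assumes only the kubectl binary and the context name from the log, and the back-off values are illustrative rather than the test's actual intervals.

// Hypothetical sketch of the retry-with-backoff pattern shown above:
// re-run "kubectl top pods" until metrics are available or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(7 * time.Minute) // overall budget, roughly matching the retries in the log
	backoff := 3 * time.Second

	for {
		out, err := exec.Command("kubectl", "--context", "addons-892779",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("failed checking metric server: %v\n%s", err, out)
			return
		}
		fmt.Printf("will retry after %s: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // grow the wait between attempts, as the logged intervals do
	}
}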
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-892779 -n addons-892779
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 logs -n 25: (1.241728816s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-114118                                                                     | download-only-114118 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| delete  | -p download-only-553455                                                                     | download-only-553455 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-110570 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC |                     |
	|         | binary-mirror-110570                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43021                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-110570                                                                     | binary-mirror-110570 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| addons  | enable dashboard -p                                                                         | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC |                     |
	|         | addons-892779                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC |                     |
	|         | addons-892779                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-892779 --wait=true                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:57 UTC | 28 Oct 24 10:57 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:57 UTC | 28 Oct 24 10:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:57 UTC | 28 Oct 24 10:57 UTC |
	|         | -p addons-892779                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-892779 ip                                                                            | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-892779 ssh cat                                                                       | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:58 UTC |
	|         | /opt/local-path-provisioner/pvc-89c5613b-7edc-42a1-8a07-f72dc621843c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC | 28 Oct 24 10:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-892779 ssh curl -s                                                                   | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892779 addons                                                                        | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 10:59 UTC | 28 Oct 24 10:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-892779 ip                                                                            | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 11:00 UTC | 28 Oct 24 11:00 UTC |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 11:00 UTC | 28 Oct 24 11:00 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892779 addons disable                                                                | addons-892779        | jenkins | v1.34.0 | 28 Oct 24 11:00 UTC | 28 Oct 24 11:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:55:19
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:55:19.338822  141007 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:55:19.338962  141007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:55:19.338974  141007 out.go:358] Setting ErrFile to fd 2...
	I1028 10:55:19.338979  141007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:55:19.339177  141007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 10:55:19.339814  141007 out.go:352] Setting JSON to false
	I1028 10:55:19.340729  141007 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2262,"bootTime":1730110657,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 10:55:19.340797  141007 start.go:139] virtualization: kvm guest
	I1028 10:55:19.343088  141007 out.go:177] * [addons-892779] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 10:55:19.345140  141007 notify.go:220] Checking for updates...
	I1028 10:55:19.345159  141007 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 10:55:19.346720  141007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:55:19.348489  141007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 10:55:19.349927  141007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 10:55:19.351444  141007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 10:55:19.353145  141007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 10:55:19.354877  141007 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:55:19.387632  141007 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 10:55:19.389160  141007 start.go:297] selected driver: kvm2
	I1028 10:55:19.389182  141007 start.go:901] validating driver "kvm2" against <nil>
	I1028 10:55:19.389195  141007 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 10:55:19.389982  141007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:55:19.390084  141007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 10:55:19.405550  141007 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 10:55:19.405607  141007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:55:19.405863  141007 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 10:55:19.405898  141007 cni.go:84] Creating CNI manager for ""
	I1028 10:55:19.405939  141007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 10:55:19.405947  141007 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 10:55:19.406007  141007 start.go:340] cluster config:
	{Name:addons-892779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:55:19.406098  141007 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:55:19.407968  141007 out.go:177] * Starting "addons-892779" primary control-plane node in "addons-892779" cluster
	I1028 10:55:19.409604  141007 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:55:19.409657  141007 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 10:55:19.409665  141007 cache.go:56] Caching tarball of preloaded images
	I1028 10:55:19.409763  141007 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 10:55:19.409777  141007 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 10:55:19.410080  141007 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/config.json ...
	I1028 10:55:19.410102  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/config.json: {Name:mka098263b9c5fb67d1a426a55772f1cc3aa82ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:19.410267  141007 start.go:360] acquireMachinesLock for addons-892779: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 10:55:19.410315  141007 start.go:364] duration metric: took 33.953µs to acquireMachinesLock for "addons-892779"
	I1028 10:55:19.410332  141007 start.go:93] Provisioning new machine with config: &{Name:addons-892779 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 10:55:19.410394  141007 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 10:55:19.412274  141007 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1028 10:55:19.412424  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:55:19.412479  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:55:19.428108  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I1028 10:55:19.428696  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:55:19.429378  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:55:19.429401  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:55:19.429783  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:55:19.429987  141007 main.go:141] libmachine: (addons-892779) Calling .GetMachineName
	I1028 10:55:19.430139  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:19.430294  141007 start.go:159] libmachine.API.Create for "addons-892779" (driver="kvm2")
	I1028 10:55:19.430328  141007 client.go:168] LocalClient.Create starting
	I1028 10:55:19.430372  141007 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 10:55:19.577405  141007 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 10:55:19.662594  141007 main.go:141] libmachine: Running pre-create checks...
	I1028 10:55:19.662617  141007 main.go:141] libmachine: (addons-892779) Calling .PreCreateCheck
	I1028 10:55:19.663165  141007 main.go:141] libmachine: (addons-892779) Calling .GetConfigRaw
	I1028 10:55:19.663569  141007 main.go:141] libmachine: Creating machine...
	I1028 10:55:19.663584  141007 main.go:141] libmachine: (addons-892779) Calling .Create
	I1028 10:55:19.663710  141007 main.go:141] libmachine: (addons-892779) Creating KVM machine...
	I1028 10:55:19.664912  141007 main.go:141] libmachine: (addons-892779) DBG | found existing default KVM network
	I1028 10:55:19.665694  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:19.665485  141029 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1028 10:55:19.665722  141007 main.go:141] libmachine: (addons-892779) DBG | created network xml: 
	I1028 10:55:19.665735  141007 main.go:141] libmachine: (addons-892779) DBG | <network>
	I1028 10:55:19.665743  141007 main.go:141] libmachine: (addons-892779) DBG |   <name>mk-addons-892779</name>
	I1028 10:55:19.665751  141007 main.go:141] libmachine: (addons-892779) DBG |   <dns enable='no'/>
	I1028 10:55:19.665761  141007 main.go:141] libmachine: (addons-892779) DBG |   
	I1028 10:55:19.665770  141007 main.go:141] libmachine: (addons-892779) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 10:55:19.665781  141007 main.go:141] libmachine: (addons-892779) DBG |     <dhcp>
	I1028 10:55:19.665791  141007 main.go:141] libmachine: (addons-892779) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 10:55:19.665806  141007 main.go:141] libmachine: (addons-892779) DBG |     </dhcp>
	I1028 10:55:19.665817  141007 main.go:141] libmachine: (addons-892779) DBG |   </ip>
	I1028 10:55:19.665827  141007 main.go:141] libmachine: (addons-892779) DBG |   
	I1028 10:55:19.665833  141007 main.go:141] libmachine: (addons-892779) DBG | </network>
	I1028 10:55:19.665838  141007 main.go:141] libmachine: (addons-892779) DBG | 
	I1028 10:55:19.671227  141007 main.go:141] libmachine: (addons-892779) DBG | trying to create private KVM network mk-addons-892779 192.168.39.0/24...
	I1028 10:55:19.739438  141007 main.go:141] libmachine: (addons-892779) DBG | private KVM network mk-addons-892779 192.168.39.0/24 created
	I1028 10:55:19.739475  141007 main.go:141] libmachine: (addons-892779) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779 ...
	I1028 10:55:19.739499  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:19.739395  141029 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 10:55:19.739601  141007 main.go:141] libmachine: (addons-892779) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 10:55:19.739635  141007 main.go:141] libmachine: (addons-892779) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 10:55:20.004738  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:20.004567  141029 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa...
	I1028 10:55:20.321771  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:20.321603  141029 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/addons-892779.rawdisk...
	I1028 10:55:20.321800  141007 main.go:141] libmachine: (addons-892779) DBG | Writing magic tar header
	I1028 10:55:20.321814  141007 main.go:141] libmachine: (addons-892779) DBG | Writing SSH key tar header
	I1028 10:55:20.321823  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:20.321724  141029 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779 ...
	I1028 10:55:20.321835  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779
	I1028 10:55:20.321944  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779 (perms=drwx------)
	I1028 10:55:20.321973  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 10:55:20.321985  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 10:55:20.321996  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 10:55:20.322008  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 10:55:20.322016  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 10:55:20.322022  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home/jenkins
	I1028 10:55:20.322028  141007 main.go:141] libmachine: (addons-892779) DBG | Checking permissions on dir: /home
	I1028 10:55:20.322036  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 10:55:20.322052  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 10:55:20.322066  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 10:55:20.322076  141007 main.go:141] libmachine: (addons-892779) DBG | Skipping /home - not owner
	I1028 10:55:20.322090  141007 main.go:141] libmachine: (addons-892779) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 10:55:20.322101  141007 main.go:141] libmachine: (addons-892779) Creating domain...
	I1028 10:55:20.323218  141007 main.go:141] libmachine: (addons-892779) define libvirt domain using xml: 
	I1028 10:55:20.323245  141007 main.go:141] libmachine: (addons-892779) <domain type='kvm'>
	I1028 10:55:20.323252  141007 main.go:141] libmachine: (addons-892779)   <name>addons-892779</name>
	I1028 10:55:20.323257  141007 main.go:141] libmachine: (addons-892779)   <memory unit='MiB'>4000</memory>
	I1028 10:55:20.323263  141007 main.go:141] libmachine: (addons-892779)   <vcpu>2</vcpu>
	I1028 10:55:20.323271  141007 main.go:141] libmachine: (addons-892779)   <features>
	I1028 10:55:20.323277  141007 main.go:141] libmachine: (addons-892779)     <acpi/>
	I1028 10:55:20.323283  141007 main.go:141] libmachine: (addons-892779)     <apic/>
	I1028 10:55:20.323288  141007 main.go:141] libmachine: (addons-892779)     <pae/>
	I1028 10:55:20.323292  141007 main.go:141] libmachine: (addons-892779)     
	I1028 10:55:20.323297  141007 main.go:141] libmachine: (addons-892779)   </features>
	I1028 10:55:20.323301  141007 main.go:141] libmachine: (addons-892779)   <cpu mode='host-passthrough'>
	I1028 10:55:20.323308  141007 main.go:141] libmachine: (addons-892779)   
	I1028 10:55:20.323313  141007 main.go:141] libmachine: (addons-892779)   </cpu>
	I1028 10:55:20.323320  141007 main.go:141] libmachine: (addons-892779)   <os>
	I1028 10:55:20.323337  141007 main.go:141] libmachine: (addons-892779)     <type>hvm</type>
	I1028 10:55:20.323368  141007 main.go:141] libmachine: (addons-892779)     <boot dev='cdrom'/>
	I1028 10:55:20.323396  141007 main.go:141] libmachine: (addons-892779)     <boot dev='hd'/>
	I1028 10:55:20.323410  141007 main.go:141] libmachine: (addons-892779)     <bootmenu enable='no'/>
	I1028 10:55:20.323420  141007 main.go:141] libmachine: (addons-892779)   </os>
	I1028 10:55:20.323430  141007 main.go:141] libmachine: (addons-892779)   <devices>
	I1028 10:55:20.323441  141007 main.go:141] libmachine: (addons-892779)     <disk type='file' device='cdrom'>
	I1028 10:55:20.323465  141007 main.go:141] libmachine: (addons-892779)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/boot2docker.iso'/>
	I1028 10:55:20.323483  141007 main.go:141] libmachine: (addons-892779)       <target dev='hdc' bus='scsi'/>
	I1028 10:55:20.323495  141007 main.go:141] libmachine: (addons-892779)       <readonly/>
	I1028 10:55:20.323505  141007 main.go:141] libmachine: (addons-892779)     </disk>
	I1028 10:55:20.323515  141007 main.go:141] libmachine: (addons-892779)     <disk type='file' device='disk'>
	I1028 10:55:20.323528  141007 main.go:141] libmachine: (addons-892779)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 10:55:20.323544  141007 main.go:141] libmachine: (addons-892779)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/addons-892779.rawdisk'/>
	I1028 10:55:20.323555  141007 main.go:141] libmachine: (addons-892779)       <target dev='hda' bus='virtio'/>
	I1028 10:55:20.323576  141007 main.go:141] libmachine: (addons-892779)     </disk>
	I1028 10:55:20.323602  141007 main.go:141] libmachine: (addons-892779)     <interface type='network'>
	I1028 10:55:20.323613  141007 main.go:141] libmachine: (addons-892779)       <source network='mk-addons-892779'/>
	I1028 10:55:20.323620  141007 main.go:141] libmachine: (addons-892779)       <model type='virtio'/>
	I1028 10:55:20.323631  141007 main.go:141] libmachine: (addons-892779)     </interface>
	I1028 10:55:20.323646  141007 main.go:141] libmachine: (addons-892779)     <interface type='network'>
	I1028 10:55:20.323663  141007 main.go:141] libmachine: (addons-892779)       <source network='default'/>
	I1028 10:55:20.323674  141007 main.go:141] libmachine: (addons-892779)       <model type='virtio'/>
	I1028 10:55:20.323696  141007 main.go:141] libmachine: (addons-892779)     </interface>
	I1028 10:55:20.323706  141007 main.go:141] libmachine: (addons-892779)     <serial type='pty'>
	I1028 10:55:20.323714  141007 main.go:141] libmachine: (addons-892779)       <target port='0'/>
	I1028 10:55:20.323725  141007 main.go:141] libmachine: (addons-892779)     </serial>
	I1028 10:55:20.323737  141007 main.go:141] libmachine: (addons-892779)     <console type='pty'>
	I1028 10:55:20.323750  141007 main.go:141] libmachine: (addons-892779)       <target type='serial' port='0'/>
	I1028 10:55:20.323761  141007 main.go:141] libmachine: (addons-892779)     </console>
	I1028 10:55:20.323768  141007 main.go:141] libmachine: (addons-892779)     <rng model='virtio'>
	I1028 10:55:20.323849  141007 main.go:141] libmachine: (addons-892779)       <backend model='random'>/dev/random</backend>
	I1028 10:55:20.323879  141007 main.go:141] libmachine: (addons-892779)     </rng>
	I1028 10:55:20.323894  141007 main.go:141] libmachine: (addons-892779)     
	I1028 10:55:20.323903  141007 main.go:141] libmachine: (addons-892779)     
	I1028 10:55:20.323911  141007 main.go:141] libmachine: (addons-892779)   </devices>
	I1028 10:55:20.323921  141007 main.go:141] libmachine: (addons-892779) </domain>
	I1028 10:55:20.323932  141007 main.go:141] libmachine: (addons-892779) 
	I1028 10:55:20.328480  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:63:0d:71 in network default
	I1028 10:55:20.329082  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:20.329097  141007 main.go:141] libmachine: (addons-892779) Ensuring networks are active...
	I1028 10:55:20.329816  141007 main.go:141] libmachine: (addons-892779) Ensuring network default is active
	I1028 10:55:20.330156  141007 main.go:141] libmachine: (addons-892779) Ensuring network mk-addons-892779 is active
	I1028 10:55:20.330620  141007 main.go:141] libmachine: (addons-892779) Getting domain xml...
	I1028 10:55:20.331327  141007 main.go:141] libmachine: (addons-892779) Creating domain...
	I1028 10:55:21.556374  141007 main.go:141] libmachine: (addons-892779) Waiting to get IP...
	I1028 10:55:21.557046  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:21.557518  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:21.557575  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:21.557459  141029 retry.go:31] will retry after 300.808512ms: waiting for machine to come up
	I1028 10:55:21.860110  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:21.860725  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:21.860753  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:21.860687  141029 retry.go:31] will retry after 265.374853ms: waiting for machine to come up
	I1028 10:55:22.128294  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:22.128732  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:22.128754  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:22.128715  141029 retry.go:31] will retry after 428.941852ms: waiting for machine to come up
	I1028 10:55:22.559417  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:22.559864  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:22.559892  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:22.559804  141029 retry.go:31] will retry after 382.977845ms: waiting for machine to come up
	I1028 10:55:22.944439  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:22.944879  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:22.944906  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:22.944826  141029 retry.go:31] will retry after 464.717241ms: waiting for machine to come up
	I1028 10:55:23.411517  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:23.412060  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:23.412105  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:23.412003  141029 retry.go:31] will retry after 783.986977ms: waiting for machine to come up
	I1028 10:55:24.198089  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:24.198754  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:24.198778  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:24.198687  141029 retry.go:31] will retry after 893.564422ms: waiting for machine to come up
	I1028 10:55:25.094315  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:25.094658  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:25.094679  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:25.094621  141029 retry.go:31] will retry after 1.159093255s: waiting for machine to come up
	I1028 10:55:26.256081  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:26.256513  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:26.256536  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:26.256475  141029 retry.go:31] will retry after 1.171773821s: waiting for machine to come up
	I1028 10:55:27.429585  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:27.430183  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:27.430210  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:27.430147  141029 retry.go:31] will retry after 2.270421076s: waiting for machine to come up
	I1028 10:55:29.702478  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:29.702894  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:29.702927  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:29.702844  141029 retry.go:31] will retry after 2.482086728s: waiting for machine to come up
	I1028 10:55:32.188442  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:32.188906  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:32.188932  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:32.188826  141029 retry.go:31] will retry after 2.448291987s: waiting for machine to come up
	I1028 10:55:34.638905  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:34.639359  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:34.639383  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:34.639318  141029 retry.go:31] will retry after 3.063947725s: waiting for machine to come up
	I1028 10:55:37.704581  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:37.704986  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find current IP address of domain addons-892779 in network mk-addons-892779
	I1028 10:55:37.705009  141007 main.go:141] libmachine: (addons-892779) DBG | I1028 10:55:37.704960  141029 retry.go:31] will retry after 4.695382005s: waiting for machine to come up
	I1028 10:55:42.403938  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.404433  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has current primary IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.404489  141007 main.go:141] libmachine: (addons-892779) Found IP for machine: 192.168.39.106
	I1028 10:55:42.404515  141007 main.go:141] libmachine: (addons-892779) Reserving static IP address...
	I1028 10:55:42.404895  141007 main.go:141] libmachine: (addons-892779) DBG | unable to find host DHCP lease matching {name: "addons-892779", mac: "52:54:00:7b:e3:76", ip: "192.168.39.106"} in network mk-addons-892779
	I1028 10:55:42.483522  141007 main.go:141] libmachine: (addons-892779) DBG | Getting to WaitForSSH function...
	I1028 10:55:42.483557  141007 main.go:141] libmachine: (addons-892779) Reserved static IP address: 192.168.39.106
	I1028 10:55:42.483581  141007 main.go:141] libmachine: (addons-892779) Waiting for SSH to be available...
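The retry loop above is the driver polling libvirt's DHCP leases on the private network until the VM's MAC address (52:54:00:7b:e3:76) picks up an address; once 192.168.39.106 appears it is reserved as a static lease. An equivalent manual check would look like this (illustrative only, not part of the test run):
	# Illustrative only: watch for the DHCP lease the loop above waits on.
	sudo virsh net-dhcp-leases mk-addons-892779 | grep 52:54:00:7b:e3:76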
	I1028 10:55:42.486681  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.487120  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.487156  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.487327  141007 main.go:141] libmachine: (addons-892779) DBG | Using SSH client type: external
	I1028 10:55:42.487380  141007 main.go:141] libmachine: (addons-892779) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa (-rw-------)
	I1028 10:55:42.487445  141007 main.go:141] libmachine: (addons-892779) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 10:55:42.487468  141007 main.go:141] libmachine: (addons-892779) DBG | About to run SSH command:
	I1028 10:55:42.487487  141007 main.go:141] libmachine: (addons-892779) DBG | exit 0
	I1028 10:55:42.613718  141007 main.go:141] libmachine: (addons-892779) DBG | SSH cmd err, output: <nil>: 
	I1028 10:55:42.614088  141007 main.go:141] libmachine: (addons-892779) KVM machine creation complete!
	I1028 10:55:42.614409  141007 main.go:141] libmachine: (addons-892779) Calling .GetConfigRaw
	I1028 10:55:42.614956  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:42.615147  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:42.615275  141007 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 10:55:42.615291  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:55:42.617042  141007 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 10:55:42.617059  141007 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 10:55:42.617066  141007 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 10:55:42.617072  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:42.619675  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.620043  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.620068  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.620206  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:42.620365  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.620525  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.620656  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:42.620812  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:42.620997  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:42.621009  141007 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 10:55:42.725004  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 10:55:42.725034  141007 main.go:141] libmachine: Detecting the provisioner...
	I1028 10:55:42.725050  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:42.728062  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.728390  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.728418  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.728574  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:42.728754  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.728927  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.729059  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:42.729261  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:42.729426  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:42.729437  141007 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 10:55:42.834431  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 10:55:42.834550  141007 main.go:141] libmachine: found compatible host: buildroot
	I1028 10:55:42.834569  141007 main.go:141] libmachine: Provisioning with buildroot...
	I1028 10:55:42.834586  141007 main.go:141] libmachine: (addons-892779) Calling .GetMachineName
	I1028 10:55:42.834868  141007 buildroot.go:166] provisioning hostname "addons-892779"
	I1028 10:55:42.834898  141007 main.go:141] libmachine: (addons-892779) Calling .GetMachineName
	I1028 10:55:42.835108  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:42.837837  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.838192  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.838220  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.838383  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:42.838569  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.838735  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.838885  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:42.839040  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:42.839255  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:42.839271  141007 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-892779 && echo "addons-892779" | sudo tee /etc/hostname
	I1028 10:55:42.960598  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-892779
	
	I1028 10:55:42.960633  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:42.963564  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.963989  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:42.964013  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:42.964348  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:42.964496  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.964594  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:42.964742  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:42.964898  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:42.965083  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:42.965099  141007 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-892779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-892779/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-892779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 10:55:43.083462  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
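The shell snippet above pins the hostname inside the guest: it sets the hostname to addons-892779 and rewrites (or appends) the 127.0.1.1 entry in /etc/hosts so the name resolves locally. A quick hand check of the result (illustrative only):
	# Illustrative only: confirm the hostname provisioning performed above.
	minikube ssh -p addons-892779 "hostname; grep 127.0.1.1 /etc/hosts"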
	I1028 10:55:43.083508  141007 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 10:55:43.083533  141007 buildroot.go:174] setting up certificates
	I1028 10:55:43.083546  141007 provision.go:84] configureAuth start
	I1028 10:55:43.083556  141007 main.go:141] libmachine: (addons-892779) Calling .GetMachineName
	I1028 10:55:43.083837  141007 main.go:141] libmachine: (addons-892779) Calling .GetIP
	I1028 10:55:43.086572  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.086936  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.086963  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.087160  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.089377  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.089767  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.089796  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.089926  141007 provision.go:143] copyHostCerts
	I1028 10:55:43.089999  141007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 10:55:43.090157  141007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 10:55:43.090213  141007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 10:55:43.090259  141007 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.addons-892779 san=[127.0.0.1 192.168.39.106 addons-892779 localhost minikube]
	I1028 10:55:43.228217  141007 provision.go:177] copyRemoteCerts
	I1028 10:55:43.228273  141007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 10:55:43.228295  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.231198  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.231519  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.231548  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.231749  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.231935  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.232061  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.232177  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:55:43.316364  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 10:55:43.342498  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 10:55:43.368125  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 10:55:43.393360  141007 provision.go:87] duration metric: took 309.798537ms to configureAuth
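configureAuth generated a server certificate for the machine (SANs 127.0.0.1, 192.168.39.106, addons-892779, localhost, minikube) and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. One way to verify the remote copies by hand (illustrative only):
	# Illustrative only: the certs scp'd above should now exist on the guest.
	minikube ssh -p addons-892779 "ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"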
	I1028 10:55:43.393391  141007 buildroot.go:189] setting minikube options for container-runtime
	I1028 10:55:43.393582  141007 config.go:182] Loaded profile config "addons-892779": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 10:55:43.393662  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.396695  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.397055  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.397091  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.397266  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.397482  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.397677  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.397848  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.397992  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:43.398151  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:43.398165  141007 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 10:55:43.627324  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
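The tee command above writes a small sysconfig drop-in so CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry range, then restarts crio. Verifying it by hand would look like this (illustrative only):
	# Illustrative only: inspect the CRI-O drop-in written above.
	minikube ssh -p addons-892779 "cat /etc/sysconfig/crio.minikube"
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '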
	
	I1028 10:55:43.627356  141007 main.go:141] libmachine: Checking connection to Docker...
	I1028 10:55:43.627365  141007 main.go:141] libmachine: (addons-892779) Calling .GetURL
	I1028 10:55:43.628845  141007 main.go:141] libmachine: (addons-892779) DBG | Using libvirt version 6000000
	I1028 10:55:43.631450  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.631854  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.631890  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.632102  141007 main.go:141] libmachine: Docker is up and running!
	I1028 10:55:43.632118  141007 main.go:141] libmachine: Reticulating splines...
	I1028 10:55:43.632128  141007 client.go:171] duration metric: took 24.201788801s to LocalClient.Create
	I1028 10:55:43.632154  141007 start.go:167] duration metric: took 24.201861716s to libmachine.API.Create "addons-892779"
	I1028 10:55:43.632177  141007 start.go:293] postStartSetup for "addons-892779" (driver="kvm2")
	I1028 10:55:43.632193  141007 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 10:55:43.632220  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.632473  141007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 10:55:43.632498  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.634808  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.635189  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.635205  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.635418  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.635638  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.635783  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.635912  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:55:43.720544  141007 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 10:55:43.724808  141007 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 10:55:43.724850  141007 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 10:55:43.724951  141007 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 10:55:43.724986  141007 start.go:296] duration metric: took 92.799351ms for postStartSetup
	I1028 10:55:43.725021  141007 main.go:141] libmachine: (addons-892779) Calling .GetConfigRaw
	I1028 10:55:43.725608  141007 main.go:141] libmachine: (addons-892779) Calling .GetIP
	I1028 10:55:43.728028  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.728385  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.728414  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.728607  141007 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/config.json ...
	I1028 10:55:43.728821  141007 start.go:128] duration metric: took 24.318415865s to createHost
	I1028 10:55:43.728851  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.731173  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.731546  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.731580  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.731726  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.731914  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.732155  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.732326  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.732518  141007 main.go:141] libmachine: Using SSH client type: native
	I1028 10:55:43.732682  141007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1028 10:55:43.732693  141007 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 10:55:43.838396  141007 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730112943.809346361
	
	I1028 10:55:43.838424  141007 fix.go:216] guest clock: 1730112943.809346361
	I1028 10:55:43.838433  141007 fix.go:229] Guest: 2024-10-28 10:55:43.809346361 +0000 UTC Remote: 2024-10-28 10:55:43.72883726 +0000 UTC m=+24.427622117 (delta=80.509101ms)
	I1028 10:55:43.838484  141007 fix.go:200] guest clock delta is within tolerance: 80.509101ms
	I1028 10:55:43.838492  141007 start.go:83] releasing machines lock for "addons-892779", held for 24.428166535s
	I1028 10:55:43.838521  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.838786  141007 main.go:141] libmachine: (addons-892779) Calling .GetIP
	I1028 10:55:43.841838  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.842278  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.842307  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.842464  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.843023  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.843196  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:55:43.843284  141007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 10:55:43.843346  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.843387  141007 ssh_runner.go:195] Run: cat /version.json
	I1028 10:55:43.843416  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:55:43.846296  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.846325  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.846651  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.846681  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.846715  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:43.846731  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:43.846839  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.846931  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:55:43.847024  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.847091  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:55:43.847162  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.847220  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:55:43.847286  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:55:43.847318  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:55:43.951281  141007 ssh_runner.go:195] Run: systemctl --version
	I1028 10:55:43.958027  141007 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 10:55:44.121177  141007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 10:55:44.128479  141007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 10:55:44.128560  141007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 10:55:44.147474  141007 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 10:55:44.147502  141007 start.go:495] detecting cgroup driver to use...
	I1028 10:55:44.147570  141007 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 10:55:44.164142  141007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 10:55:44.179618  141007 docker.go:217] disabling cri-docker service (if available) ...
	I1028 10:55:44.179681  141007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 10:55:44.194807  141007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 10:55:44.209829  141007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 10:55:44.322617  141007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 10:55:44.457091  141007 docker.go:233] disabling docker service ...
	I1028 10:55:44.457169  141007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 10:55:44.472608  141007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 10:55:44.486472  141007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 10:55:44.620106  141007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 10:55:44.748714  141007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 10:55:44.763436  141007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 10:55:44.782711  141007 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 10:55:44.782768  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.793825  141007 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 10:55:44.793892  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.805075  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.816243  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.827490  141007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 10:55:44.839005  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.850290  141007 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.868242  141007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 10:55:44.879209  141007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 10:55:44.888944  141007 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 10:55:44.889002  141007 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 10:55:44.908562  141007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 10:55:44.922885  141007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:55:45.031729  141007 ssh_runner.go:195] Run: sudo systemctl restart crio
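The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, run conmon in the pod cgroup, and allow unprivileged low ports via default_sysctls; br_netfilter is then loaded, IP forwarding enabled and crio restarted. A spot check of the resulting drop-in (illustrative sketch of expected values, not a dump from this run):
	# Illustrative only: key settings expected in 02-crio.conf after the edits above
	# (exact file layout may differ).
	minikube ssh -p addons-892779 "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",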
	I1028 10:55:45.128849  141007 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 10:55:45.128941  141007 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 10:55:45.134025  141007 start.go:563] Will wait 60s for crictl version
	I1028 10:55:45.134102  141007 ssh_runner.go:195] Run: which crictl
	I1028 10:55:45.138032  141007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 10:55:45.181652  141007 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 10:55:45.181770  141007 ssh_runner.go:195] Run: crio --version
	I1028 10:55:45.211427  141007 ssh_runner.go:195] Run: crio --version
	I1028 10:55:45.242954  141007 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 10:55:45.244330  141007 main.go:141] libmachine: (addons-892779) Calling .GetIP
	I1028 10:55:45.247038  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:45.247361  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:55:45.247387  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:55:45.247584  141007 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 10:55:45.252064  141007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 10:55:45.265334  141007 kubeadm.go:883] updating cluster {Name:addons-892779 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 10:55:45.265447  141007 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:55:45.265494  141007 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 10:55:45.303366  141007 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 10:55:45.303436  141007 ssh_runner.go:195] Run: which lz4
	I1028 10:55:45.308074  141007 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 10:55:45.312561  141007 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 10:55:45.312596  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 10:55:46.767713  141007 crio.go:462] duration metric: took 1.45968553s to copy over tarball
	I1028 10:55:46.767797  141007 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 10:55:49.064182  141007 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.296357601s)
	I1028 10:55:49.064216  141007 crio.go:469] duration metric: took 2.296466387s to extract the tarball
	I1028 10:55:49.064224  141007 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 10:55:49.105000  141007 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 10:55:49.156410  141007 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 10:55:49.156437  141007 cache_images.go:84] Images are preloaded, skipping loading
	I1028 10:55:49.156445  141007 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.2 crio true true} ...
	I1028 10:55:49.156547  141007 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-892779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 10:55:49.156610  141007 ssh_runner.go:195] Run: crio config
	I1028 10:55:49.215773  141007 cni.go:84] Creating CNI manager for ""
	I1028 10:55:49.215799  141007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 10:55:49.215810  141007 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 10:55:49.215832  141007 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-892779 NodeName:addons-892779 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 10:55:49.215947  141007 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-892779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
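The generated kubeadm configuration above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is copied to the guest as /var/tmp/minikube/kubeadm.yaml.new a few lines below; the bootstrap phase later runs kubeadm init against it, roughly as sketched here (minikube adds further flags such as preflight-error ignores; illustrative only):
	# Illustrative only: how the config above is ultimately consumed on the guest.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml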
	
	I1028 10:55:49.216005  141007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 10:55:49.226945  141007 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 10:55:49.227007  141007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 10:55:49.238852  141007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1028 10:55:49.258237  141007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 10:55:49.276761  141007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1028 10:55:49.294732  141007 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I1028 10:55:49.298946  141007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 10:55:49.312232  141007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:55:49.447632  141007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 10:55:49.466005  141007 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779 for IP: 192.168.39.106
	I1028 10:55:49.466038  141007 certs.go:194] generating shared ca certs ...
	I1028 10:55:49.466057  141007 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.466212  141007 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 10:55:49.603469  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt ...
	I1028 10:55:49.603501  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt: {Name:mk054550a0fe354b3c02d1432ba9351dced683bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.603696  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key ...
	I1028 10:55:49.603711  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key: {Name:mk4b7477e3761da1d78e3e4f1c6e0daa874a67de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.603812  141007 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 10:55:49.698175  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt ...
	I1028 10:55:49.698209  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt: {Name:mk7e92ecf4d6400b107409be7619010de2dda2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.698404  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key ...
	I1028 10:55:49.698421  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key: {Name:mk44c6e5638cfda241a2bee5cb00c19511e2a30f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.698523  141007 certs.go:256] generating profile certs ...
	I1028 10:55:49.698597  141007 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.key
	I1028 10:55:49.698616  141007 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt with IP's: []
	I1028 10:55:49.750900  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt ...
	I1028 10:55:49.750935  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: {Name:mkdd92ed1d1be6dff715d84b590f28bd5d2a2d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.751140  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.key ...
	I1028 10:55:49.751158  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.key: {Name:mkd3bde6b0f0846cbc5a6d4d432825ecb16c07bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:49.751294  141007 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key.33f8117d
	I1028 10:55:49.751320  141007 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt.33f8117d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.106]
	I1028 10:55:50.104817  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt.33f8117d ...
	I1028 10:55:50.104853  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt.33f8117d: {Name:mk295c55c16fbdb7a6141ddaa94a647e76e2e0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:50.105053  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key.33f8117d ...
	I1028 10:55:50.105072  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key.33f8117d: {Name:mkc520591f57fd9b7ad5872b707ae9ee59a38bcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:50.105175  141007 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt.33f8117d -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt
	I1028 10:55:50.105273  141007 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key.33f8117d -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key
	I1028 10:55:50.105347  141007 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.key
	I1028 10:55:50.105375  141007 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.crt with IP's: []
	I1028 10:55:50.208201  141007 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.crt ...
	I1028 10:55:50.208236  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.crt: {Name:mkc2e85fe6e63b2edfeaa492eb26b69df346de19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:50.208431  141007 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.key ...
	I1028 10:55:50.208451  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.key: {Name:mk1318316227f112d5da9f267b5e8c039e4f2824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:55:50.208688  141007 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 10:55:50.208735  141007 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 10:55:50.208766  141007 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 10:55:50.208801  141007 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
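	The certs.go steps above build a self-signed "minikubeCA" and then profile certificates signed by it, including an apiserver cert whose SANs are the cluster service IP, localhost, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.106). A condensed Go sketch of that flow follows; it is illustrative only (minikube's certs.go also handles key reuse, proxy-client certs, and locking), and file names/validity periods here are assumptions:

    // certs_sketch.go - minimal sketch of the CA + apiserver cert generation logged above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // 1. Self-signed "minikubeCA", as in the "generating shared ca certs" step.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // 2. API server cert signed by that CA, with the IP SANs seen in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.106"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Write both certs as PEM, roughly matching ca.crt / apiserver.crt in the log.
        os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: caDER}), 0644)
        os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644)
    }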
	I1028 10:55:50.209454  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 10:55:50.239104  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 10:55:50.272308  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 10:55:50.299244  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 10:55:50.326103  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 10:55:50.352994  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 10:55:50.379724  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 10:55:50.406935  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 10:55:50.433319  141007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 10:55:50.459198  141007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 10:55:50.478007  141007 ssh_runner.go:195] Run: openssl version
	I1028 10:55:50.484628  141007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 10:55:50.496702  141007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:55:50.502386  141007 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:55:50.502460  141007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 10:55:50.508903  141007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
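	The b5213941.0 symlink created above follows OpenSSL's lookup convention: trust stores are searched by subject-hash-named links, so after the CA is placed under /usr/share/ca-certificates it is also linked under its subject hash in /etc/ssl/certs. A small Go sketch of the same two commands (illustrative, not minikube's ssh_runner code) follows:

    // cahash_sketch.go - compute the OpenSSL subject hash of the minikube CA and
    // create the hash-named symlink that OpenSSL uses for trust lookups.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const ca = "/etc/ssl/certs/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace any stale link
        if err := os.Symlink(ca, link); err != nil {
            panic(err)
        }
    }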
	I1028 10:55:50.520803  141007 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 10:55:50.525462  141007 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 10:55:50.525519  141007 kubeadm.go:392] StartCluster: {Name:addons-892779 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-892779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:55:50.525627  141007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 10:55:50.525715  141007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 10:55:50.569043  141007 cri.go:89] found id: ""
	I1028 10:55:50.569120  141007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 10:55:50.579506  141007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 10:55:50.589812  141007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 10:55:50.599728  141007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 10:55:50.599750  141007 kubeadm.go:157] found existing configuration files:
	
	I1028 10:55:50.599793  141007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 10:55:50.609750  141007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 10:55:50.609839  141007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 10:55:50.620345  141007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 10:55:50.630257  141007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 10:55:50.630319  141007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 10:55:50.640199  141007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 10:55:50.649294  141007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 10:55:50.649367  141007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 10:55:50.659991  141007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 10:55:50.669502  141007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 10:55:50.669581  141007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 10:55:50.680184  141007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 10:55:50.870066  141007 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 10:56:00.524626  141007 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 10:56:00.524738  141007 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 10:56:00.524847  141007 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 10:56:00.525002  141007 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 10:56:00.525131  141007 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 10:56:00.525219  141007 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 10:56:00.526778  141007 out.go:235]   - Generating certificates and keys ...
	I1028 10:56:00.526873  141007 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 10:56:00.526963  141007 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 10:56:00.527049  141007 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 10:56:00.527114  141007 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 10:56:00.527189  141007 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 10:56:00.527276  141007 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 10:56:00.527349  141007 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 10:56:00.527507  141007 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-892779 localhost] and IPs [192.168.39.106 127.0.0.1 ::1]
	I1028 10:56:00.527593  141007 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 10:56:00.527741  141007 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-892779 localhost] and IPs [192.168.39.106 127.0.0.1 ::1]
	I1028 10:56:00.527843  141007 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 10:56:00.528019  141007 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 10:56:00.528087  141007 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 10:56:00.528158  141007 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 10:56:00.528239  141007 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 10:56:00.528321  141007 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 10:56:00.528415  141007 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 10:56:00.528502  141007 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 10:56:00.528553  141007 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 10:56:00.528620  141007 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 10:56:00.528688  141007 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 10:56:00.530247  141007 out.go:235]   - Booting up control plane ...
	I1028 10:56:00.530345  141007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 10:56:00.530419  141007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 10:56:00.530481  141007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 10:56:00.530579  141007 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 10:56:00.530673  141007 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 10:56:00.530712  141007 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 10:56:00.530822  141007 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 10:56:00.530924  141007 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 10:56:00.530974  141007 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092577ms
	I1028 10:56:00.531050  141007 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 10:56:00.531099  141007 kubeadm.go:310] [api-check] The API server is healthy after 5.502361567s
	I1028 10:56:00.531190  141007 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 10:56:00.531299  141007 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 10:56:00.531356  141007 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 10:56:00.531559  141007 kubeadm.go:310] [mark-control-plane] Marking the node addons-892779 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 10:56:00.531626  141007 kubeadm.go:310] [bootstrap-token] Using token: h4n5ke.6v6qoasogb607car
	I1028 10:56:00.533320  141007 out.go:235]   - Configuring RBAC rules ...
	I1028 10:56:00.533455  141007 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 10:56:00.533581  141007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 10:56:00.533773  141007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 10:56:00.533896  141007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 10:56:00.534001  141007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 10:56:00.534078  141007 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 10:56:00.534176  141007 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 10:56:00.534215  141007 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 10:56:00.534262  141007 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 10:56:00.534268  141007 kubeadm.go:310] 
	I1028 10:56:00.534317  141007 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 10:56:00.534326  141007 kubeadm.go:310] 
	I1028 10:56:00.534399  141007 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 10:56:00.534405  141007 kubeadm.go:310] 
	I1028 10:56:00.534425  141007 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 10:56:00.534511  141007 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 10:56:00.534595  141007 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 10:56:00.534610  141007 kubeadm.go:310] 
	I1028 10:56:00.534689  141007 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 10:56:00.534698  141007 kubeadm.go:310] 
	I1028 10:56:00.534765  141007 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 10:56:00.534774  141007 kubeadm.go:310] 
	I1028 10:56:00.534850  141007 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 10:56:00.534955  141007 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 10:56:00.535056  141007 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 10:56:00.535062  141007 kubeadm.go:310] 
	I1028 10:56:00.535133  141007 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 10:56:00.535205  141007 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 10:56:00.535211  141007 kubeadm.go:310] 
	I1028 10:56:00.535289  141007 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h4n5ke.6v6qoasogb607car \
	I1028 10:56:00.535378  141007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 10:56:00.535398  141007 kubeadm.go:310] 	--control-plane 
	I1028 10:56:00.535403  141007 kubeadm.go:310] 
	I1028 10:56:00.535471  141007 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 10:56:00.535477  141007 kubeadm.go:310] 
	I1028 10:56:00.535555  141007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h4n5ke.6v6qoasogb607car \
	I1028 10:56:00.535670  141007 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
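	The --discovery-token-ca-cert-hash printed in the join commands above is not a secret value: it is the SHA-256 digest of the cluster CA's Subject Public Key Info, and joining nodes use it to pin the CA they discover. It can be recomputed from ca.crt; a short Go sketch (illustrative, not kubeadm's code) follows:

    // tokenhash_sketch.go - recompute the discovery-token-ca-cert-hash value from the CA cert.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }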
	I1028 10:56:00.535686  141007 cni.go:84] Creating CNI manager for ""
	I1028 10:56:00.535696  141007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 10:56:00.537466  141007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 10:56:00.539046  141007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 10:56:00.550808  141007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
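	The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is what makes the "bridge CNI" choice above concrete. The exact contents minikube generates are not shown in the log; a plausible minimal layout, assumed here for illustration (bridge plugin with host-local IPAM on the 10.244.0.0/16 pod CIDR from the kube-proxy config, plus portmap), written from a Go sketch:

    // cni_sketch.go - write an assumed minimal bridge CNI conflist; the real file
    // minikube installs may differ in fields and ordering.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }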
	I1028 10:56:00.578183  141007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 10:56:00.578338  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-892779 minikube.k8s.io/updated_at=2024_10_28T10_56_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=addons-892779 minikube.k8s.io/primary=true
	I1028 10:56:00.578344  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:00.592539  141007 ops.go:34] apiserver oom_adj: -16
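	The "apiserver oom_adj: -16" line records that the API server runs with a negative OOM adjustment, making it one of the last processes the kernel's OOM killer would pick. The check itself is just a read of the legacy procfs file; a tiny Go equivalent (a sketch; pgrep -o picking the oldest match is an assumption, the log used plain pgrep):

    // oomadj_sketch.go - find the kube-apiserver PID and print its /proc/<pid>/oom_adj value.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(val)))
    }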
	I1028 10:56:00.751656  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:01.251749  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:01.751868  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:02.252743  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:02.752103  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:03.252487  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:03.751837  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:04.251771  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:04.751987  141007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 10:56:04.898615  141007 kubeadm.go:1113] duration metric: took 4.320420995s to wait for elevateKubeSystemPrivileges
	I1028 10:56:04.898659  141007 kubeadm.go:394] duration metric: took 14.373143469s to StartCluster
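	The run of "kubectl get sa default" calls above, roughly every 500ms, is minikube waiting for the default ServiceAccount to appear before it applies the minikube-rbac cluster-admin binding (the elevateKubeSystemPrivileges step that took ~4.3s). An illustrative Go equivalent of that poll-then-bind loop, shelling out to kubectl as the log does (the 2-minute deadline is an assumption):

    // rbac_wait_sketch.go - wait for the default SA, then grant cluster-admin to kube-system:default.
    package main

    import (
        "os/exec"
        "time"
    )

    func kubectl(args ...string) error {
        cmd := exec.Command("kubectl", append([]string{"--kubeconfig", "/var/lib/minikube/kubeconfig"}, args...)...)
        return cmd.Run()
    }

    func main() {
        // Poll until the default service account exists (the API server may still be settling).
        deadline := time.Now().Add(2 * time.Minute)
        for kubectl("get", "sa", "default") != nil {
            if time.Now().After(deadline) {
                panic("timed out waiting for default service account")
            }
            time.Sleep(500 * time.Millisecond)
        }
        // Bind cluster-admin to kube-system's default SA, as the minikube-rbac binding does.
        _ = kubectl("create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default")
    }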
	I1028 10:56:04.898682  141007 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:56:04.898813  141007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 10:56:04.899156  141007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 10:56:04.899386  141007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 10:56:04.899404  141007 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 10:56:04.899472  141007 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1028 10:56:04.899607  141007 addons.go:69] Setting yakd=true in profile "addons-892779"
	I1028 10:56:04.899628  141007 addons.go:69] Setting default-storageclass=true in profile "addons-892779"
	I1028 10:56:04.899624  141007 addons.go:69] Setting inspektor-gadget=true in profile "addons-892779"
	I1028 10:56:04.899642  141007 addons.go:69] Setting metrics-server=true in profile "addons-892779"
	I1028 10:56:04.899648  141007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-892779"
	I1028 10:56:04.899652  141007 addons.go:234] Setting addon inspektor-gadget=true in "addons-892779"
	I1028 10:56:04.899659  141007 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-892779"
	I1028 10:56:04.899672  141007 addons.go:69] Setting gcp-auth=true in profile "addons-892779"
	I1028 10:56:04.899685  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899689  141007 addons.go:69] Setting volcano=true in profile "addons-892779"
	I1028 10:56:04.899696  141007 mustload.go:65] Loading cluster: addons-892779
	I1028 10:56:04.899699  141007 addons.go:234] Setting addon volcano=true in "addons-892779"
	I1028 10:56:04.899700  141007 config.go:182] Loaded profile config "addons-892779": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 10:56:04.899697  141007 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-892779"
	I1028 10:56:04.899747  141007 addons.go:69] Setting storage-provisioner=true in profile "addons-892779"
	I1028 10:56:04.899779  141007 addons.go:69] Setting ingress-dns=true in profile "addons-892779"
	I1028 10:56:04.899799  141007 addons.go:234] Setting addon ingress-dns=true in "addons-892779"
	I1028 10:56:04.899652  141007 addons.go:234] Setting addon metrics-server=true in "addons-892779"
	I1028 10:56:04.899825  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899854  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899897  141007 config.go:182] Loaded profile config "addons-892779": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 10:56:04.900161  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900171  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900200  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.900205  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.900268  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900291  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900314  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.899730  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.900345  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.900357  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.900322  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899785  141007 addons.go:234] Setting addon storage-provisioner=true in "addons-892779"
	I1028 10:56:04.900623  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.900674  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.900703  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899652  141007 addons.go:69] Setting ingress=true in profile "addons-892779"
	I1028 10:56:04.900845  141007 addons.go:234] Setting addon ingress=true in "addons-892779"
	I1028 10:56:04.900883  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899752  141007 addons.go:69] Setting cloud-spanner=true in profile "addons-892779"
	I1028 10:56:04.900994  141007 addons.go:234] Setting addon cloud-spanner=true in "addons-892779"
	I1028 10:56:04.901003  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.901021  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.901032  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899738  141007 addons.go:69] Setting volumesnapshots=true in profile "addons-892779"
	I1028 10:56:04.901237  141007 addons.go:234] Setting addon volumesnapshots=true in "addons-892779"
	I1028 10:56:04.901264  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.901281  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.901297  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899678  141007 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-892779"
	I1028 10:56:04.901377  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.901421  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.901455  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899760  141007 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-892779"
	I1028 10:56:04.902263  141007 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-892779"
	I1028 10:56:04.902295  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.899763  141007 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-892779"
	I1028 10:56:04.902691  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.902731  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899770  141007 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-892779"
	I1028 10:56:04.903098  141007 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-892779"
	I1028 10:56:04.903134  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.903165  141007 out.go:177] * Verifying Kubernetes components...
	I1028 10:56:04.899633  141007 addons.go:234] Setting addon yakd=true in "addons-892779"
	I1028 10:56:04.903320  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.903500  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.903529  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.903657  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.903681  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.899742  141007 addons.go:69] Setting registry=true in profile "addons-892779"
	I1028 10:56:04.903851  141007 addons.go:234] Setting addon registry=true in "addons-892779"
	I1028 10:56:04.903889  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.904939  141007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 10:56:04.922302  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37649
	I1028 10:56:04.922998  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.923147  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I1028 10:56:04.924156  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.924196  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.924422  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.924462  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I1028 10:56:04.924476  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.924543  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.924564  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.924575  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.924613  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.925012  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
	I1028 10:56:04.925190  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I1028 10:56:04.925306  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.925496  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.925509  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.925852  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.925919  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.926014  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.926021  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.926076  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.926599  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.926618  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.926673  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.926716  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.926825  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.926836  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.927240  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.927274  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.927614  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I1028 10:56:04.927778  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.928129  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.928150  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.930075  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.930118  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.939058  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.939126  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.939150  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.939216  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.939531  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.940395  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.940421  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.940503  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.941376  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.941875  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.941916  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.942317  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.942361  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.942687  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.942731  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.945847  141007 addons.go:234] Setting addon default-storageclass=true in "addons-892779"
	I1028 10:56:04.945899  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.946337  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.946494  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.947715  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I1028 10:56:04.948308  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.948894  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.948911  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.949325  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.949762  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.949787  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.956931  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36571
	I1028 10:56:04.957615  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.958375  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.958395  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.958838  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.959612  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.959786  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.962949  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I1028 10:56:04.962990  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I1028 10:56:04.963399  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.963501  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.963904  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.963924  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.964080  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.964092  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.964503  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.964519  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.964565  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I1028 10:56:04.965132  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.965174  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.965693  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.966244  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.966266  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.966650  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.970721  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I1028 10:56:04.971965  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I1028 10:56:04.976189  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I1028 10:56:04.976210  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I1028 10:56:04.976879  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I1028 10:56:04.977251  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I1028 10:56:04.977722  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.978269  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.978292  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.978734  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.978912  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.982293  141007 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-892779"
	I1028 10:56:04.982351  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:04.982715  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.982753  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.982763  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.982790  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.983293  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.983337  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.983868  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I1028 10:56:04.983969  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984024  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I1028 10:56:04.984158  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984260  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984443  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984641  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.984662  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.984763  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I1028 10:56:04.984818  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.984840  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.984910  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.984981  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.984992  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.985453  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.985537  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.985549  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.985565  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.985908  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.985923  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.986084  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.986094  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.986143  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.986321  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.986333  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.986390  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.987121  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.987158  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:04.987376  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.987397  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.987465  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.987509  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.987708  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.987773  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.989234  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:04.989574  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.989595  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.989983  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.990129  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:04.990142  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:04.990202  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.990608  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.990817  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.991736  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.992116  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:04.992281  141007 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1028 10:56:04.992589  141007 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1028 10:56:04.992967  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:04.993216  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.993859  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:04.993876  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:04.993990  141007 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 10:56:04.994007  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1028 10:56:04.994026  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:04.994693  141007 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 10:56:04.994708  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1028 10:56:04.994726  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:04.994862  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:04.994903  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:04.994910  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:04.994917  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:04.994923  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:04.997313  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:04.997894  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:04.999170  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:05.000904  141007 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1028 10:56:05.001132  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.001340  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.002088  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.002131  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.002159  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.002171  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.002369  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.002436  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.002525  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:05.002538  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:05.002586  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	W1028 10:56:05.002641  141007 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1028 10:56:05.002727  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.002855  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.003158  141007 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 10:56:05.003171  141007 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 10:56:05.003189  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.003250  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.004432  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.004608  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.007666  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.008275  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.008304  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.008538  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.008768  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.008940  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.009076  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.018365  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I1028 10:56:05.018775  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.019551  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.019577  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.019957  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.020037  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I1028 10:56:05.020232  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I1028 10:56:05.020336  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.020830  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.022003  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.023087  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.023643  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.023664  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.024107  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.024126  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.024542  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.024593  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.025149  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:05.025192  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:05.025485  141007 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1028 10:56:05.025674  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.027228  141007 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 10:56:05.027256  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1028 10:56:05.027281  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.028535  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.030376  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1028 10:56:05.031198  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I1028 10:56:05.031221  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.032838  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1028 10:56:05.033366  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34805
	I1028 10:56:05.033840  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.033950  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.033978  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.034163  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.034330  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.034476  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.034534  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I1028 10:56:05.034818  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.035001  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.035017  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.035088  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.035163  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.035735  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1028 10:56:05.036032  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.036061  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.036220  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41661
	I1028 10:56:05.036359  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.036502  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.037065  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.037217  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.037521  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I1028 10:56:05.038158  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.038179  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.038537  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1028 10:56:05.038595  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.038800  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.039007  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.039067  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.039681  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.039906  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.040517  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.040538  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.040845  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.040863  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.041515  141007 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1028 10:56:05.041547  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.041521  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.041626  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1028 10:56:05.041695  141007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 10:56:05.041787  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.042428  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1028 10:56:05.042738  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.043206  141007 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1028 10:56:05.043235  141007 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1028 10:56:05.043239  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.043787  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.043807  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.043308  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.043356  141007 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 10:56:05.043895  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 10:56:05.043910  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.043978  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.044150  141007 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1028 10:56:05.044591  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.044347  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:05.044664  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:05.044894  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1028 10:56:05.045484  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:05.045558  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:05.046228  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45509
	I1028 10:56:05.047280  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.047307  141007 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 10:56:05.047284  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I1028 10:56:05.047452  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1028 10:56:05.047465  141007 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1028 10:56:05.047483  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.047830  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.048258  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.048281  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.048343  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.048588  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1028 10:56:05.048918  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.048963  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.049113  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.049126  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.049187  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.049297  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.049316  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.049466  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.049487  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.049661  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.049724  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.050127  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.050144  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.050317  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.050479  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.050656  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.050854  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.051148  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.051327  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.052171  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.052236  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.052575  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.052857  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.052885  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.053046  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.053186  141007 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 10:56:05.053236  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.053946  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.054074  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.054297  141007 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1028 10:56:05.054301  141007 out.go:177]   - Using image docker.io/registry:2.8.3
	I1028 10:56:05.055882  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I1028 10:56:05.056225  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.056407  141007 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1028 10:56:05.056429  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1028 10:56:05.056446  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.056544  141007 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1028 10:56:05.056653  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.056669  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.056938  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.057183  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.058117  141007 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1028 10:56:05.058141  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1028 10:56:05.058498  141007 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 10:56:05.058515  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1028 10:56:05.058533  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.059428  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.060118  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1028 10:56:05.060138  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1028 10:56:05.060161  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.060174  141007 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1028 10:56:05.060187  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1028 10:56:05.060204  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.060354  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.060844  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.060884  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.061121  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.061350  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.061508  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.061746  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.062049  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.062080  141007 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1028 10:56:05.062549  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.062568  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.062726  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.062940  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.063106  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.063262  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.063511  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1028 10:56:05.063531  141007 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1028 10:56:05.063548  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.064639  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.065052  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.065074  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.065387  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.065653  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.065812  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.065915  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.066895  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.067349  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.067367  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.067560  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.067774  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.067835  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.068062  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.068202  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.068226  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.068259  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.068532  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.068705  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.068841  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.068964  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	W1028 10:56:05.069717  141007 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39100->192.168.39.106:22: read: connection reset by peer
	I1028 10:56:05.069746  141007 retry.go:31] will retry after 245.097269ms: ssh: handshake failed: read tcp 192.168.39.1:39100->192.168.39.106:22: read: connection reset by peer
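	The handshake failure above is transient (the guest's sshd is still settling), so the retry line simply waits ~245ms and dials again. A minimal, self-contained sketch of that dial-with-backoff pattern, using a hypothetical dialWithRetry helper rather than minikube's actual retry.go API:

```go
// A sketch of retry-with-backoff dialing; names and parameters are illustrative.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps trying to open a TCP connection to addr, backing off
// between attempts, until it succeeds or the attempt budget is exhausted.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial %s failed (attempt %d/%d): %v; retrying in %s\n",
			addr, i+1, attempts, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // back off a little more each time
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// 192.168.39.106:22 is the guest address from the log; adjust as needed.
	conn, err := dialWithRetry("192.168.39.106:22", 4, 250*time.Millisecond)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected:", conn.RemoteAddr())
}
```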
	I1028 10:56:05.073068  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
	I1028 10:56:05.073448  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.073890  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.073905  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.074180  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.074338  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.075816  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.077862  141007 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1028 10:56:05.079584  141007 out.go:177]   - Using image docker.io/busybox:stable
	I1028 10:56:05.081127  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I1028 10:56:05.081297  141007 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 10:56:05.081317  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1028 10:56:05.081335  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.081704  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:05.082762  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:05.082786  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:05.083270  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:05.083786  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:05.084683  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.085087  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.085121  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.085240  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.085409  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.085552  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:05.085597  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.085721  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:05.085732  141007 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 10:56:05.085961  141007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 10:56:05.085983  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:05.089056  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.089433  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:05.089451  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:05.089671  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:05.089856  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:05.090056  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:05.090195  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
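	Each sshutil.go:53 line above records a client built from the same four logged fields: the guest IP, port 22, the per-machine id_rsa key, and the docker user. A minimal sketch of constructing such a client with golang.org/x/crypto/ssh; it mirrors the logged fields only, not minikube's internal sshutil implementation:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Fields taken from the log: guest IP, port 22, per-machine key, user "docker".
	keyPath := "/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.168.39.106:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Run a single command over the connection, the same channel the addon
	// YAMLs are copied over and applied through.
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("uname -a")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
```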
	I1028 10:56:05.494484  141007 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1028 10:56:05.494514  141007 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1028 10:56:05.588140  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 10:56:05.607572  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1028 10:56:05.607598  141007 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1028 10:56:05.626542  141007 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1028 10:56:05.626572  141007 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1028 10:56:05.631247  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1028 10:56:05.633946  141007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 10:56:05.634001  141007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
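	The pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts block in front of the forward plugin (and a log directive before errors), then replaces the ConfigMap. Reconstructed directly from that sed expression, the injected stanza is the raw string below; the rest of the Corefile is whatever the cluster already ships and is not shown:

```go
package main

import "fmt"

// corednsHosts is the stanza inserted ahead of "forward . /etc/resolv.conf"
// in the CoreDNS Corefile, so that host.minikube.internal resolves to the
// host-side address 192.168.39.1.
const corednsHosts = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }`

func main() { fmt.Println(corednsHosts) }
```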
	I1028 10:56:05.638847  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1028 10:56:05.657282  141007 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1028 10:56:05.657311  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1028 10:56:05.662039  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1028 10:56:05.669195  141007 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 10:56:05.669226  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1028 10:56:05.680679  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1028 10:56:05.682878  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1028 10:56:05.704388  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 10:56:05.746944  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1028 10:56:05.747167  141007 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1028 10:56:05.747189  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1028 10:56:05.798147  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1028 10:56:05.798178  141007 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1028 10:56:05.908680  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1028 10:56:05.908705  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1028 10:56:05.916166  141007 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1028 10:56:05.916191  141007 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1028 10:56:05.944477  141007 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 10:56:05.944509  141007 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 10:56:05.953661  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1028 10:56:06.056040  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1028 10:56:06.073011  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1028 10:56:06.073045  141007 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1028 10:56:06.148319  141007 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1028 10:56:06.148347  141007 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1028 10:56:06.168180  141007 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 10:56:06.168210  141007 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 10:56:06.266092  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1028 10:56:06.266123  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1028 10:56:06.421030  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1028 10:56:06.421063  141007 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1028 10:56:06.453749  141007 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1028 10:56:06.453783  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1028 10:56:06.534935  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 10:56:06.558395  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1028 10:56:06.558424  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1028 10:56:06.755468  141007 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 10:56:06.755493  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1028 10:56:06.762827  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1028 10:56:06.846776  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1028 10:56:06.846818  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1028 10:56:07.196882  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 10:56:07.275362  141007 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1028 10:56:07.275397  141007 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1028 10:56:07.340954  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.75277225s)
	I1028 10:56:07.341008  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:07.341020  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:07.341361  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:07.341385  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:07.341397  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:07.341407  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:07.341665  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:07.341728  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:07.341687  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:07.350705  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:07.350727  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:07.350986  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:07.351007  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:07.637863  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1028 10:56:07.637892  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1028 10:56:07.974684  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1028 10:56:07.974723  141007 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1028 10:56:08.213334  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1028 10:56:08.213369  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1028 10:56:08.390234  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1028 10:56:08.390272  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1028 10:56:08.743637  141007 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 10:56:08.743670  141007 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1028 10:56:09.213168  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1028 10:56:09.955562  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.324274381s)
	I1028 10:56:09.955569  141007 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.32158678s)
	I1028 10:56:09.955631  141007 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.321602155s)
	I1028 10:56:09.955678  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.316806661s)
	I1028 10:56:09.955744  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.275036169s)
	I1028 10:56:09.955683  141007 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1028 10:56:09.955769  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.955779  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.955748  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.955903  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.955642  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.955965  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.955717  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.293647969s)
	I1028 10:56:09.956219  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.956230  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.956891  141007 node_ready.go:35] waiting up to 6m0s for node "addons-892779" to be "Ready" ...
	I1028 10:56:09.957101  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957122  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957120  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957136  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957144  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.957147  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957151  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.957154  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957161  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.957160  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957167  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.957192  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957200  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957208  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.957215  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.957253  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957275  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957281  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957289  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:09.957295  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:09.957511  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957545  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:09.957575  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957580  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.957606  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.957614  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.958275  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.958287  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.958528  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:09.958538  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:09.987611  141007 node_ready.go:49] node "addons-892779" has status "Ready":"True"
	I1028 10:56:09.987645  141007 node_ready.go:38] duration metric: took 30.731126ms for node "addons-892779" to be "Ready" ...
	I1028 10:56:09.987658  141007 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 10:56:10.077186  141007 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace to be "Ready" ...
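	pod_ready.go polls the API server until each system-critical pod reports the Ready condition. A minimal sketch of that kind of wait loop with client-go, using an assumed kubeconfig path and the pod name taken from the log; minikube's own helper carries more bookkeeping than this:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the pod name comes from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll every 2s, for up to 6 minutes, until the pod reports Ready=True.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "amd-gpu-device-plugin-77nkc", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
}
```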
	I1028 10:56:10.511739  141007 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-892779" context rescaled to 1 replicas
	I1028 10:56:12.038921  141007 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1028 10:56:12.038971  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:12.042346  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:12.042916  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:12.042953  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:12.043143  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:12.043382  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:12.043581  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:12.043746  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:12.105418  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:12.619990  141007 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1028 10:56:12.782618  141007 addons.go:234] Setting addon gcp-auth=true in "addons-892779"
	I1028 10:56:12.782689  141007 host.go:66] Checking if "addons-892779" exists ...
	I1028 10:56:12.783381  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:12.783461  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:12.799289  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I1028 10:56:12.799834  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:12.800329  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:12.800351  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:12.800705  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:12.801294  141007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 10:56:12.801343  141007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 10:56:12.817122  141007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I1028 10:56:12.817645  141007 main.go:141] libmachine: () Calling .GetVersion
	I1028 10:56:12.818194  141007 main.go:141] libmachine: Using API Version  1
	I1028 10:56:12.818219  141007 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 10:56:12.818568  141007 main.go:141] libmachine: () Calling .GetMachineName
	I1028 10:56:12.818760  141007 main.go:141] libmachine: (addons-892779) Calling .GetState
	I1028 10:56:12.820463  141007 main.go:141] libmachine: (addons-892779) Calling .DriverName
	I1028 10:56:12.820708  141007 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1028 10:56:12.820739  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHHostname
	I1028 10:56:12.823309  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:12.823655  141007 main.go:141] libmachine: (addons-892779) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:e3:76", ip: ""} in network mk-addons-892779: {Iface:virbr1 ExpiryTime:2024-10-28 11:55:35 +0000 UTC Type:0 Mac:52:54:00:7b:e3:76 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:addons-892779 Clientid:01:52:54:00:7b:e3:76}
	I1028 10:56:12.823684  141007 main.go:141] libmachine: (addons-892779) DBG | domain addons-892779 has defined IP address 192.168.39.106 and MAC address 52:54:00:7b:e3:76 in network mk-addons-892779
	I1028 10:56:12.823825  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHPort
	I1028 10:56:12.824004  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHKeyPath
	I1028 10:56:12.824159  141007 main.go:141] libmachine: (addons-892779) Calling .GetSSHUsername
	I1028 10:56:12.824275  141007 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/addons-892779/id_rsa Username:docker}
	I1028 10:56:14.142461  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:14.182020  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.49910956s)
	I1028 10:56:14.182067  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182076  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182173  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.477748281s)
	I1028 10:56:14.182224  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182235  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182336  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.435360522s)
	I1028 10:56:14.182343  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.182367  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182378  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182391  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.182415  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.182425  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182433  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182493  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.228783662s)
	I1028 10:56:14.182529  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182541  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182549  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.126472728s)
	I1028 10:56:14.182580  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182590  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182640  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.182659  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.182676  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.182686  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182693  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182733  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.647738425s)
	I1028 10:56:14.182800  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182832  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182834  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.419954937s)
	I1028 10:56:14.182862  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182875  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.182904  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.182925  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.182972  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.182980  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.182988  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.182997  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.183015  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.986100551s)
	W1028 10:56:14.183041  141007 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1028 10:56:14.183087  141007 retry.go:31] will retry after 360.732586ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
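	(The failure above is the usual CRD race: the VolumeSnapshotClass manifest is submitted in the same kubectl apply batch as the snapshot.storage.k8s.io CRDs, so the class has no registered kind yet, and the retry.go line schedules a re-apply a few hundred milliseconds later. The following is a minimal, hedged sketch of that retry shape, not minikube's actual retry.go or addons.go code; the kubectl arguments, attempt count, and delay are illustrative and taken from the log above.)

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // applyWithRetry re-runs a kubectl invocation a few times with a fixed delay,
	    // the same shape of behaviour the "will retry after 360.732586ms" line describes.
	    func applyWithRetry(args []string, attempts int, delay time.Duration) error {
	    	var lastErr error
	    	for i := 0; i < attempts; i++ {
	    		out, err := exec.Command("kubectl", args...).CombinedOutput()
	    		if err == nil {
	    			return nil
	    		}
	    		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
	    		time.Sleep(delay) // by the next attempt the CRDs are usually established
	    	}
	    	return lastErr
	    }

	    func main() {
	    	// Illustrative invocation only; the manifest path matches the one in the log.
	    	args := []string{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}
	    	if err := applyWithRetry(args, 3, 360*time.Millisecond); err != nil {
	    		fmt.Println(err)
	    	}
	    }

	(In the log, the re-apply is issued with --force at 10:56:14.545 and completes successfully about 1.5s later.)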
	I1028 10:56:14.183111  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.183121  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.183131  141007 addons.go:475] Verifying addon ingress=true in "addons-892779"
	I1028 10:56:14.183132  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.183159  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.183166  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.183174  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.183181  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.183268  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.183284  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.183328  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.183338  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.183346  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.183352  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.184727  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.184755  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.184762  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.184770  141007 addons.go:475] Verifying addon metrics-server=true in "addons-892779"
	I1028 10:56:14.185005  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.185025  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.185047  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.185054  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.185230  141007 out.go:177] * Verifying ingress addon...
	I1028 10:56:14.185646  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.185679  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.185685  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.185693  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.185700  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.183244  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.186376  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.186404  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.186410  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.186477  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.186490  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.186500  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.186508  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.186616  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.186640  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.186646  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.186656  141007 addons.go:475] Verifying addon registry=true in "addons-892779"
	I1028 10:56:14.186926  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:14.186957  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.186965  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.187844  141007 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1028 10:56:14.188578  141007 out.go:177] * Verifying registry addon...
	I1028 10:56:14.188584  141007 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-892779 service yakd-dashboard -n yakd-dashboard
	
	I1028 10:56:14.190636  141007 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1028 10:56:14.194468  141007 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1028 10:56:14.194489  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:14.200974  141007 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1028 10:56:14.201005  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
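	(From here the repeated kapi.go:96 lines are a readiness poll: for each addon, the test loops until every pod matching a label selector reports the Ready condition. A minimal sketch of that check using client-go is shown below; it is not minikube's kapi.go, and the kubeconfig path, namespace, and selector are assumptions copied from the log lines above.)

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // allReady reports whether every pod matching the label selector has the
	    // Ready condition set to True, roughly what each kapi.go:96 line is polling for.
	    func allReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	    	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	    	if err != nil || len(pods.Items) == 0 {
	    		return false, err
	    	}
	    	for _, p := range pods.Items {
	    		ready := false
	    		for _, c := range p.Status.Conditions {
	    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	    				ready = true
	    				break
	    			}
	    		}
	    		if !ready {
	    			return false, nil
	    		}
	    	}
	    	return true, nil
	    }

	    func main() {
	    	// Kubeconfig path as it appears in the log; adjust when running elsewhere.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	for {
	    		ok, err := allReady(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
	    		if err == nil && ok {
	    			fmt.Println("pods ready")
	    			return
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    }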
	I1028 10:56:14.241040  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:14.241064  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:14.241338  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:14.241358  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:14.545072  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1028 10:56:14.946180  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:14.948674  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:15.205785  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:15.207435  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:15.714816  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:15.731428  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:15.766546  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.553325605s)
	I1028 10:56:15.766617  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:15.766634  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:15.766636  141007 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.94589861s)
	I1028 10:56:15.766912  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:15.766980  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:15.766998  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:15.767011  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:15.767285  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:15.767345  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:15.767360  141007 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-892779"
	I1028 10:56:15.767321  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:15.768355  141007 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1028 10:56:15.769147  141007 out.go:177] * Verifying csi-hostpath-driver addon...
	I1028 10:56:15.770739  141007 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1028 10:56:15.771806  141007 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1028 10:56:15.772097  141007 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1028 10:56:15.772115  141007 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1028 10:56:15.798939  141007 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1028 10:56:15.798964  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:15.839144  141007 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1028 10:56:15.839179  141007 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1028 10:56:15.960754  141007 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 10:56:15.960783  141007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1028 10:56:16.064525  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.519337351s)
	I1028 10:56:16.064606  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:16.064634  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:16.064947  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:16.064968  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:16.064978  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:16.064986  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:16.065195  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:16.065284  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:16.065263  141007 main.go:141] libmachine: (addons-892779) DBG | Closing plugin on server side
	I1028 10:56:16.067863  141007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1028 10:56:16.196466  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:16.196656  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:16.276748  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:16.584459  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:16.694343  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:16.699045  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:16.778772  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:17.217312  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:17.217778  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:17.307497  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:17.571961  141007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.50404897s)
	I1028 10:56:17.572026  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:17.572044  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:17.572336  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:17.572356  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:17.572370  141007 main.go:141] libmachine: Making call to close driver server
	I1028 10:56:17.572378  141007 main.go:141] libmachine: (addons-892779) Calling .Close
	I1028 10:56:17.572642  141007 main.go:141] libmachine: Successfully made call to close driver server
	I1028 10:56:17.572662  141007 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 10:56:17.574778  141007 addons.go:475] Verifying addon gcp-auth=true in "addons-892779"
	I1028 10:56:17.577220  141007 out.go:177] * Verifying gcp-auth addon...
	I1028 10:56:17.579814  141007 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1028 10:56:17.600397  141007 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1028 10:56:17.600429  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:17.700219  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:17.700512  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:17.800484  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:18.084251  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:18.192461  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:18.195205  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:18.280830  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:18.583585  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:18.586050  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:18.692420  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:18.694731  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:18.793852  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:19.087152  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:19.194577  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:19.196234  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:19.295800  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:19.585820  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:19.695247  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:19.695727  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:19.776939  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:20.084211  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:20.194258  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:20.195383  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:20.276273  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:20.584151  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:20.692189  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:20.693890  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:20.776464  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:21.083296  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:21.084491  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:21.192431  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:21.194165  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:21.277510  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:21.584798  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:21.692779  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:21.693981  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:21.777321  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:22.084734  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:22.192648  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:22.194619  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:22.277403  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:22.585931  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:22.694817  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:22.695158  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:22.987048  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:23.085848  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:23.087644  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:23.192827  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:23.197616  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:23.277254  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:23.583776  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:23.692563  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:23.694512  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:23.776915  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:24.084116  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:24.198793  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:24.199054  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:24.280040  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:24.583864  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:24.694913  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:24.698544  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:24.800076  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:25.084118  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:25.193087  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:25.194449  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:25.276823  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:25.583075  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:25.584246  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:25.695202  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:25.701115  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:25.777858  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:26.085518  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:26.195010  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:26.196129  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:26.278214  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:26.584334  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:26.849514  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:26.849795  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:26.850722  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:27.086474  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:27.192355  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:27.193853  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:27.277270  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:27.583434  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:27.696758  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:27.696871  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:27.796952  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:28.085430  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:28.087261  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:28.192653  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:28.193974  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:28.276986  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:28.584347  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:28.693883  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:28.695159  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:28.794128  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:29.084019  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:29.191790  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:29.193741  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:29.277218  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:29.584303  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:29.695868  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:29.695916  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:29.778666  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:30.085306  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:30.192325  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:30.198135  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:30.278780  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:30.584899  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:30.585193  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:30.694439  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:30.694570  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:30.776646  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:31.083766  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:31.191838  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:31.195724  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:31.276799  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:31.583116  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:31.693850  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:31.694757  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:31.777077  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:32.083692  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:32.192732  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:32.195382  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:32.276546  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:32.584035  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:32.691729  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:32.693423  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:32.776977  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:33.094792  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:33.095746  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:33.194945  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:33.205811  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:33.277752  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:33.584266  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:33.693394  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:33.697236  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:33.778521  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:34.083518  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:34.193118  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:34.194850  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:34.276993  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:34.584405  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:34.693307  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:34.695021  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:34.777077  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:35.084362  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:35.192675  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:35.194794  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:35.276995  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:35.583858  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:35.585294  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:35.693012  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:35.699168  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:35.793469  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:36.083447  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:36.193326  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:36.195797  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:36.277222  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:36.584552  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:36.694974  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:36.695621  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:36.776938  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:37.083052  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:37.192991  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:37.194413  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:37.276828  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:37.583984  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:37.693219  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:37.695035  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:37.777290  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:38.087352  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:38.090529  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:38.193216  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:38.196931  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:38.276910  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:38.583000  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:38.692314  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:38.694353  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:38.907668  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:39.084726  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:39.194776  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:39.198218  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:39.277094  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:39.584063  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:39.693417  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:39.694362  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:39.776817  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:40.083008  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:40.196305  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:40.198119  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:40.278007  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:40.923803  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:40.924062  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:40.924676  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:40.924790  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:40.926375  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:41.083295  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:41.194539  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:41.195023  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:41.297095  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:41.584206  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:41.693572  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:41.694828  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:41.778425  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:42.084715  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:42.193316  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:42.194801  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:42.277245  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:42.584752  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:42.692861  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:42.694288  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:42.776389  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:43.083730  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:43.084608  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:43.192971  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:43.194507  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:43.276636  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:43.585326  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:43.694145  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:43.694633  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:43.776499  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:44.085672  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:44.192856  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:44.194921  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:44.277553  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:44.583999  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:44.699369  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:44.700570  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:44.779222  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:45.085291  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:45.086190  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:45.194457  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:45.201722  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:45.276601  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:45.583235  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:45.692913  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:45.694580  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:45.777379  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:46.086914  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:46.194761  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:46.195303  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:46.278052  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:46.585119  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:46.692353  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:46.693860  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:46.776859  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:47.086720  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:47.192759  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:47.194070  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:47.293186  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:47.584079  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:47.584750  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:47.693535  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:47.694556  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:47.777634  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:48.083569  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:48.192759  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:48.195050  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:48.276937  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:48.606347  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:48.702142  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:48.702826  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:48.778544  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:49.083424  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:49.195410  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:49.198119  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:49.294695  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:49.584185  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:49.693138  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:49.695669  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:49.777685  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:50.082982  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:50.084579  141007 pod_ready.go:103] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"False"
	I1028 10:56:50.193441  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:50.194970  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1028 10:56:50.277299  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:50.583791  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:50.692518  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:50.694068  141007 kapi.go:107] duration metric: took 36.503429654s to wait for kubernetes.io/minikube-addons=registry ...
	I1028 10:56:50.776957  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:51.083849  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:51.193115  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:51.277769  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:51.584950  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:51.692998  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:51.776168  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:52.084492  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:52.191993  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:52.277086  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:52.585307  141007 pod_ready.go:93] pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.585332  141007 pod_ready.go:82] duration metric: took 42.508101058s for pod "amd-gpu-device-plugin-77nkc" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.585341  141007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6ck8n" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.586217  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:52.590416  141007 pod_ready.go:93] pod "coredns-7c65d6cfc9-6ck8n" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.590436  141007 pod_ready.go:82] duration metric: took 5.088551ms for pod "coredns-7c65d6cfc9-6ck8n" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.590445  141007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpcnr" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.593274  141007 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-dpcnr" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-dpcnr" not found
	I1028 10:56:52.593296  141007 pod_ready.go:82] duration metric: took 2.845518ms for pod "coredns-7c65d6cfc9-dpcnr" in "kube-system" namespace to be "Ready" ...
	E1028 10:56:52.593307  141007 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-dpcnr" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-dpcnr" not found
	I1028 10:56:52.593316  141007 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.601714  141007 pod_ready.go:93] pod "etcd-addons-892779" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.601736  141007 pod_ready.go:82] duration metric: took 8.413215ms for pod "etcd-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.601745  141007 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.612134  141007 pod_ready.go:93] pod "kube-apiserver-addons-892779" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.612163  141007 pod_ready.go:82] duration metric: took 10.410128ms for pod "kube-apiserver-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.612175  141007 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.695000  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:52.778368  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:52.780492  141007 pod_ready.go:93] pod "kube-controller-manager-addons-892779" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:52.780513  141007 pod_ready.go:82] duration metric: took 168.331391ms for pod "kube-controller-manager-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:52.780525  141007 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgxl7" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.083513  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:53.182558  141007 pod_ready.go:93] pod "kube-proxy-pgxl7" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:53.182589  141007 pod_ready.go:82] duration metric: took 402.056282ms for pod "kube-proxy-pgxl7" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.182603  141007 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.191925  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:53.276800  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:53.583234  141007 pod_ready.go:93] pod "kube-scheduler-addons-892779" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:53.583258  141007 pod_ready.go:82] duration metric: took 400.648114ms for pod "kube-scheduler-addons-892779" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.583269  141007 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-n492w" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.584869  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:53.692804  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:53.777187  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:53.982207  141007 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-n492w" in "kube-system" namespace has status "Ready":"True"
	I1028 10:56:53.982231  141007 pod_ready.go:82] duration metric: took 398.955646ms for pod "nvidia-device-plugin-daemonset-n492w" in "kube-system" namespace to be "Ready" ...
	I1028 10:56:53.982246  141007 pod_ready.go:39] duration metric: took 43.994575223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 10:56:53.982267  141007 api_server.go:52] waiting for apiserver process to appear ...
	I1028 10:56:53.982322  141007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 10:56:54.010437  141007 api_server.go:72] duration metric: took 49.110991575s to wait for apiserver process to appear ...
	I1028 10:56:54.010472  141007 api_server.go:88] waiting for apiserver healthz status ...
	I1028 10:56:54.010500  141007 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1028 10:56:54.014836  141007 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1028 10:56:54.015893  141007 api_server.go:141] control plane version: v1.31.2
	I1028 10:56:54.015919  141007 api_server.go:131] duration metric: took 5.439588ms to wait for apiserver health ...
	I1028 10:56:54.015928  141007 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 10:56:54.082985  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:54.187304  141007 system_pods.go:59] 18 kube-system pods found
	I1028 10:56:54.187342  141007 system_pods.go:61] "amd-gpu-device-plugin-77nkc" [9525ccf5-beb0-48e3-9612-30e31a087ca2] Running
	I1028 10:56:54.187350  141007 system_pods.go:61] "coredns-7c65d6cfc9-6ck8n" [22aed405-7302-480a-b873-02aecdc8c874] Running
	I1028 10:56:54.187360  141007 system_pods.go:61] "csi-hostpath-attacher-0" [596078c0-e9e3-4da9-99b7-fcf2ffb9ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1028 10:56:54.187367  141007 system_pods.go:61] "csi-hostpath-resizer-0" [2fc7fc41-f556-49a3-9922-73e16c67463a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1028 10:56:54.187378  141007 system_pods.go:61] "csi-hostpathplugin-f6btq" [100f5d1e-1127-4214-85ef-49474a262460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1028 10:56:54.187452  141007 system_pods.go:61] "etcd-addons-892779" [96816209-d6e4-41ff-a843-abb2caaa92f5] Running
	I1028 10:56:54.187462  141007 system_pods.go:61] "kube-apiserver-addons-892779" [fa0527e2-3605-47fa-8d62-ed7a49ae6a8d] Running
	I1028 10:56:54.187469  141007 system_pods.go:61] "kube-controller-manager-addons-892779" [5e473b38-93df-40f1-a084-586bce117796] Running
	I1028 10:56:54.187478  141007 system_pods.go:61] "kube-ingress-dns-minikube" [acf71611-aacb-4b72-aeb9-595f2d5717c0] Running
	I1028 10:56:54.187491  141007 system_pods.go:61] "kube-proxy-pgxl7" [3c85b65a-0083-48cd-8852-3ea8b3024bf3] Running
	I1028 10:56:54.187500  141007 system_pods.go:61] "kube-scheduler-addons-892779" [402a10fc-e775-4cea-84a4-6fec7e060c00] Running
	I1028 10:56:54.187509  141007 system_pods.go:61] "metrics-server-84c5f94fbc-748cp" [863279c2-0842-48b9-8840-31351b7a7bbc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 10:56:54.187518  141007 system_pods.go:61] "nvidia-device-plugin-daemonset-n492w" [17f0e2c2-6431-4f75-84a5-c4ccbb03c69f] Running
	I1028 10:56:54.187525  141007 system_pods.go:61] "registry-66c9cd494c-rnl5j" [5e520c13-81a2-4ebf-ab10-4fecd61cddd7] Running
	I1028 10:56:54.187534  141007 system_pods.go:61] "registry-proxy-7cjwq" [55548851-badf-40ba-a4b8-18d300af90f3] Running
	I1028 10:56:54.187544  141007 system_pods.go:61] "snapshot-controller-56fcc65765-82xbk" [f1f9cf16-2dec-41b4-9963-e49927080375] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 10:56:54.187556  141007 system_pods.go:61] "snapshot-controller-56fcc65765-mbt5s" [23af40a2-2f3d-4775-8bec-16437d1294f9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 10:56:54.187565  141007 system_pods.go:61] "storage-provisioner" [5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a] Running
	I1028 10:56:54.187576  141007 system_pods.go:74] duration metric: took 171.64213ms to wait for pod list to return data ...
	I1028 10:56:54.187586  141007 default_sa.go:34] waiting for default service account to be created ...
	I1028 10:56:54.191290  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:54.277741  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:54.383185  141007 default_sa.go:45] found service account: "default"
	I1028 10:56:54.383211  141007 default_sa.go:55] duration metric: took 195.618354ms for default service account to be created ...
	I1028 10:56:54.383220  141007 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 10:56:54.589601  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:54.593134  141007 system_pods.go:86] 18 kube-system pods found
	I1028 10:56:54.593172  141007 system_pods.go:89] "amd-gpu-device-plugin-77nkc" [9525ccf5-beb0-48e3-9612-30e31a087ca2] Running
	I1028 10:56:54.593182  141007 system_pods.go:89] "coredns-7c65d6cfc9-6ck8n" [22aed405-7302-480a-b873-02aecdc8c874] Running
	I1028 10:56:54.593191  141007 system_pods.go:89] "csi-hostpath-attacher-0" [596078c0-e9e3-4da9-99b7-fcf2ffb9ffb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1028 10:56:54.593200  141007 system_pods.go:89] "csi-hostpath-resizer-0" [2fc7fc41-f556-49a3-9922-73e16c67463a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1028 10:56:54.593211  141007 system_pods.go:89] "csi-hostpathplugin-f6btq" [100f5d1e-1127-4214-85ef-49474a262460] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1028 10:56:54.593218  141007 system_pods.go:89] "etcd-addons-892779" [96816209-d6e4-41ff-a843-abb2caaa92f5] Running
	I1028 10:56:54.593227  141007 system_pods.go:89] "kube-apiserver-addons-892779" [fa0527e2-3605-47fa-8d62-ed7a49ae6a8d] Running
	I1028 10:56:54.593234  141007 system_pods.go:89] "kube-controller-manager-addons-892779" [5e473b38-93df-40f1-a084-586bce117796] Running
	I1028 10:56:54.593242  141007 system_pods.go:89] "kube-ingress-dns-minikube" [acf71611-aacb-4b72-aeb9-595f2d5717c0] Running
	I1028 10:56:54.593250  141007 system_pods.go:89] "kube-proxy-pgxl7" [3c85b65a-0083-48cd-8852-3ea8b3024bf3] Running
	I1028 10:56:54.593257  141007 system_pods.go:89] "kube-scheduler-addons-892779" [402a10fc-e775-4cea-84a4-6fec7e060c00] Running
	I1028 10:56:54.593266  141007 system_pods.go:89] "metrics-server-84c5f94fbc-748cp" [863279c2-0842-48b9-8840-31351b7a7bbc] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 10:56:54.593278  141007 system_pods.go:89] "nvidia-device-plugin-daemonset-n492w" [17f0e2c2-6431-4f75-84a5-c4ccbb03c69f] Running
	I1028 10:56:54.593291  141007 system_pods.go:89] "registry-66c9cd494c-rnl5j" [5e520c13-81a2-4ebf-ab10-4fecd61cddd7] Running
	I1028 10:56:54.593297  141007 system_pods.go:89] "registry-proxy-7cjwq" [55548851-badf-40ba-a4b8-18d300af90f3] Running
	I1028 10:56:54.593307  141007 system_pods.go:89] "snapshot-controller-56fcc65765-82xbk" [f1f9cf16-2dec-41b4-9963-e49927080375] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 10:56:54.593321  141007 system_pods.go:89] "snapshot-controller-56fcc65765-mbt5s" [23af40a2-2f3d-4775-8bec-16437d1294f9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1028 10:56:54.593327  141007 system_pods.go:89] "storage-provisioner" [5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a] Running
	I1028 10:56:54.593338  141007 system_pods.go:126] duration metric: took 210.110388ms to wait for k8s-apps to be running ...
	I1028 10:56:54.593349  141007 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 10:56:54.593396  141007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 10:56:54.620361  141007 system_svc.go:56] duration metric: took 27.001371ms WaitForService to wait for kubelet
	I1028 10:56:54.620398  141007 kubeadm.go:582] duration metric: took 49.720961891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 10:56:54.620421  141007 node_conditions.go:102] verifying NodePressure condition ...
	I1028 10:56:54.692631  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:54.776687  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:54.781705  141007 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 10:56:54.781731  141007 node_conditions.go:123] node cpu capacity is 2
	I1028 10:56:54.781744  141007 node_conditions.go:105] duration metric: took 161.31783ms to run NodePressure ...
	I1028 10:56:54.781757  141007 start.go:241] waiting for startup goroutines ...
	I1028 10:56:55.083056  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:55.193651  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:55.276306  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:55.583975  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:55.693766  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:55.777895  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:56.083420  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:56.192676  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:56.276330  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:56.584428  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:56.692881  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:56.776453  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:57.084148  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:57.192813  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:57.276679  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:57.583340  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:57.692145  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:57.776595  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:58.083373  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:58.192389  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:58.277641  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:58.583470  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:58.692898  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:58.914972  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:59.084643  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:59.192696  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:59.283814  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:56:59.584408  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:56:59.692673  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:56:59.776824  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:00.083625  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:00.192534  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:00.276438  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:00.585468  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:00.692565  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:00.776871  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:01.084545  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:01.192581  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:01.280875  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:01.584267  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:01.692355  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:01.777632  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:02.084478  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:02.194419  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:02.278218  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:02.583474  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:02.694502  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:02.776918  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:03.083761  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:03.192967  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:03.276499  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:03.878769  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:03.879592  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:03.879676  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:04.083917  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:04.192945  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:04.277550  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:04.584188  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:04.692993  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:04.776907  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:05.083672  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:05.196850  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:05.276613  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:05.583439  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:05.694452  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:05.777080  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:06.089568  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:06.192524  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:06.276968  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:06.585136  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:06.694206  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:06.777921  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:07.083751  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:07.192887  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:07.276677  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:07.583744  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:07.692762  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:07.776542  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:08.083161  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:08.193032  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:08.276755  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:08.584530  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:08.692262  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:08.788827  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:09.083148  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:09.194362  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:09.278915  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:09.584245  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:09.700734  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:09.802603  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:10.084297  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:10.192575  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:10.277280  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:10.584141  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:10.703753  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:10.777603  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:11.084286  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:11.193191  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:11.276788  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:11.708223  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:11.811592  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:11.811878  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:12.084398  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:12.192998  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:12.279791  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:12.583244  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:12.691614  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:12.776065  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:13.084326  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:13.192371  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:13.277820  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:13.584733  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:13.698227  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:13.795473  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:14.083977  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:14.192815  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:14.277226  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:14.583164  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:14.692528  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:14.776655  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:15.084013  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:15.192759  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:15.276422  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:15.583623  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:15.692735  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:15.780542  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:16.084804  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:16.193654  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:16.277376  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:16.583932  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:16.692811  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:16.777384  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:17.084099  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:17.192930  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:17.278169  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:17.584160  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:17.693651  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:17.777044  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:18.085097  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:18.192318  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:18.277351  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:18.583944  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:18.693896  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:18.781372  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:19.084721  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:19.196147  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:19.299380  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:19.584220  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:19.693675  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:19.778218  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:20.085334  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:20.204506  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:20.283654  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:20.584516  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:20.696276  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:20.777308  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:21.083988  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:21.193325  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:21.277124  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:21.585993  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:21.696310  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:21.821988  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:22.086963  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:22.192786  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:22.276697  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:22.590465  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:22.692980  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:22.777509  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:23.083397  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:23.192305  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:23.277823  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:23.584596  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:23.697423  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:23.777271  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:24.084330  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:24.192371  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:24.278165  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:24.583545  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:24.692210  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:24.777081  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:25.083912  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:25.192884  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:25.276518  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:25.584699  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:25.692317  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:25.776896  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:26.084012  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:26.193255  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:26.277332  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:26.583659  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:26.692528  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:26.777105  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:27.087223  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:27.192159  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:27.277412  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:27.583121  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:27.692641  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:27.776264  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:28.085589  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:28.192176  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:28.277541  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:28.583532  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:28.692445  141007 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1028 10:57:28.786341  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:29.084467  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:29.192914  141007 kapi.go:107] duration metric: took 1m15.005065666s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1028 10:57:29.277646  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:29.584413  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:29.776547  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:30.083442  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:30.277322  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:30.584282  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:30.776934  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:31.083381  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:31.276968  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:31.584185  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:31.777057  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:32.084738  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:32.277898  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:32.586738  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:32.777269  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:33.083877  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:33.277096  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:33.584430  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1028 10:57:33.779412  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:34.084276  141007 kapi.go:107] duration metric: took 1m16.504448256s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1028 10:57:34.086123  141007 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-892779 cluster.
	I1028 10:57:34.087701  141007 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1028 10:57:34.089076  141007 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
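	(The gcp-auth message above refers to a pod label; a minimal sketch of applying it, assuming the addon's webhook keys on the presence of the `gcp-auth-skip-secret` label — the "true" value here is illustrative and not taken from this log:
	  kubectl label pod <pod-name> gcp-auth-skip-secret=true
	or equivalently in the pod manifest:
	  metadata:
	    labels:
	      gcp-auth-skip-secret: "true"
	)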
	I1028 10:57:34.276495  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:34.777139  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:35.276753  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:35.777692  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:36.276680  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:36.777135  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:37.278303  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:37.776940  141007 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1028 10:57:38.278485  141007 kapi.go:107] duration metric: took 1m22.506680703s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1028 10:57:38.280588  141007 out.go:177] * Enabled addons: default-storageclass, ingress-dns, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, inspektor-gadget, metrics-server, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1028 10:57:38.282251  141007 addons.go:510] duration metric: took 1m33.382788307s for enable addons: enabled=[default-storageclass ingress-dns amd-gpu-device-plugin cloud-spanner nvidia-device-plugin inspektor-gadget metrics-server storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1028 10:57:38.282300  141007 start.go:246] waiting for cluster config update ...
	I1028 10:57:38.282322  141007 start.go:255] writing updated cluster config ...
	I1028 10:57:38.282578  141007 ssh_runner.go:195] Run: rm -f paused
	I1028 10:57:38.337860  141007 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 10:57:38.339804  141007 out.go:177] * Done! kubectl is now configured to use "addons-892779" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.392251716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113424392222365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=581ca6de-fa63-446b-b904-0614c7327ee7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.392766911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4dd792c3-f63b-4fa3-bcca-98e9be98ec6b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.392829783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4dd792c3-f63b-4fa3-bcca-98e9be98ec6b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.393152573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45493c27b57f6314f0b50fe2adfd5d92b9957c5f5c62136eaa731df053687fca,PodSandboxId:e2546b31e218e78c0a0697e14dce01182ac744cdba7a0e0a1ddac9a24315d3fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730113247729004770,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-dv9b6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e92733f-d380-42bd-b6ae-3b7e7fdafb42,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bbfd469316d63e98513adec00255a025913d65e3c42e43499d8e7f9dde137bf,PodSandboxId:56c70124ad7b9cf1d45e1d3185a3d5187090eaeca0bad90ebdec95bfad89167d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730113108278734917,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2818a832-80db-43ce-ad06-1d48dd9ab54e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c943bfd8c1add619282928a9c70e4aae114aaa8b9c9f1101561b1a540fbaf976,PodSandboxId:c15cc11944ae6930d61769ef62b046eaac8edbe71566e3c36d65040133386ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730113062774249467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da189efe-7ffa-4bdf-8
7b1-c414bec80098,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d766987fd8a8b8457f9cddfb073fe57f87ba6c692c204b21dc95bab725d6f56,PodSandboxId:7ce71747e8fd706948633a9c7f3c9b305ff31f2bc3be6a44e5b0db4525abfbbd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730113012065808270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-77nkc,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9525ccf5-beb0-48e3-9612-30e31a087ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c42ee46198bd2cf39a0f0d95e80f41547590799a8dd608b84ac08c1eac7eeaf,PodSandboxId:4123bc4c5f2a85ca9b5b3f2ab226bd429cd6d4e20d27de2de9b2be57bd2c9f58,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730112986946771921,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-748cp,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863279c2-0842-48b9-8840-31351b7a7bbc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3,PodSandboxId:9d7b08ce55325cb76d1eb32007dfea4fb937669eda88604d5a6fe2a881502cf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730112972895303394,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982,PodSandboxId:1df353cc353110f318c4c2bc25bff2565de933e16806d45b9861a1560562f5a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730112968138612873,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ck8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22aed405-7302-480a-b873-02aecdc8c874,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333,PodSandboxId:49a4b2cf73cca1b0789a4194dca185e1a014e6f02100a26b52ca1e48eb1678e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730112965503468081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgxl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c85b65a-0083-48cd-8852-3ea8b3024bf3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e,PodSandboxId:04a8e192f2825d752429db585411be59560d20bcf811b491dba0c69105b41d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730112954026512304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7012c93fee37dd1aba5ee6cd983cc2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a50d3d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7,PodSandboxId:7c50b2a8ac7b3205b570362fb4d90aadd61611653848fd01096481bd16541859,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730112953978189950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306263e3cf96cfdffef24db7e5f787e3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2187e35168339bc921a854f47f0bc151162610a8cc97a7ae9aafbe62afc8e52,PodSandboxId:ad6a958974299b64fd49be850fe3d1c691052bb5325c0dd77fb50eaa75cad46b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730112953972368798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60773ad812876b76f1cfd70b128a82db,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064a9faa86b1838032c06de061cb7e0437be565324a77540d093a395e8bec074,PodSandboxId:3f911272f42032b5d0719ea81142c54b744d1d1161be8c01a9cc4854467f359d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730112954004854375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d358cbec9d53960f2e8c2a073980ca,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4dd792c3-f63b-4fa3-bcca-98e9be98ec6b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.435578455Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5d5b939-67cd-4c66-af63-25d953cfe278 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.435652795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5d5b939-67cd-4c66-af63-25d953cfe278 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.436930698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a7989a6-47e4-4b79-9e4e-7d8a5d68ad84 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.438158742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113424438127435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a7989a6-47e4-4b79-9e4e-7d8a5d68ad84 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.438915724Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56c43087-dfda-4dc4-9f86-100ca19380d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.439014393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56c43087-dfda-4dc4-9f86-100ca19380d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.439264224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45493c27b57f6314f0b50fe2adfd5d92b9957c5f5c62136eaa731df053687fca,PodSandboxId:e2546b31e218e78c0a0697e14dce01182ac744cdba7a0e0a1ddac9a24315d3fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730113247729004770,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-dv9b6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e92733f-d380-42bd-b6ae-3b7e7fdafb42,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bbfd469316d63e98513adec00255a025913d65e3c42e43499d8e7f9dde137bf,PodSandboxId:56c70124ad7b9cf1d45e1d3185a3d5187090eaeca0bad90ebdec95bfad89167d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730113108278734917,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2818a832-80db-43ce-ad06-1d48dd9ab54e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c943bfd8c1add619282928a9c70e4aae114aaa8b9c9f1101561b1a540fbaf976,PodSandboxId:c15cc11944ae6930d61769ef62b046eaac8edbe71566e3c36d65040133386ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730113062774249467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da189efe-7ffa-4bdf-8
7b1-c414bec80098,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d766987fd8a8b8457f9cddfb073fe57f87ba6c692c204b21dc95bab725d6f56,PodSandboxId:7ce71747e8fd706948633a9c7f3c9b305ff31f2bc3be6a44e5b0db4525abfbbd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730113012065808270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-77nkc,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9525ccf5-beb0-48e3-9612-30e31a087ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c42ee46198bd2cf39a0f0d95e80f41547590799a8dd608b84ac08c1eac7eeaf,PodSandboxId:4123bc4c5f2a85ca9b5b3f2ab226bd429cd6d4e20d27de2de9b2be57bd2c9f58,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730112986946771921,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-748cp,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863279c2-0842-48b9-8840-31351b7a7bbc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3,PodSandboxId:9d7b08ce55325cb76d1eb32007dfea4fb937669eda88604d5a6fe2a881502cf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730112972895303394,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982,PodSandboxId:1df353cc353110f318c4c2bc25bff2565de933e16806d45b9861a1560562f5a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730112968138612873,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ck8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22aed405-7302-480a-b873-02aecdc8c874,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333,PodSandboxId:49a4b2cf73cca1b0789a4194dca185e1a014e6f02100a26b52ca1e48eb1678e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730112965503468081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgxl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c85b65a-0083-48cd-8852-3ea8b3024bf3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e,PodSandboxId:04a8e192f2825d752429db585411be59560d20bcf811b491dba0c69105b41d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730112954026512304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7012c93fee37dd1aba5ee6cd983cc2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a50d3d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7,PodSandboxId:7c50b2a8ac7b3205b570362fb4d90aadd61611653848fd01096481bd16541859,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730112953978189950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306263e3cf96cfdffef24db7e5f787e3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2187e35168339bc921a854f47f0bc151162610a8cc97a7ae9aafbe62afc8e52,PodSandboxId:ad6a958974299b64fd49be850fe3d1c691052bb5325c0dd77fb50eaa75cad46b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730112953972368798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60773ad812876b76f1cfd70b128a82db,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064a9faa86b1838032c06de061cb7e0437be565324a77540d093a395e8bec074,PodSandboxId:3f911272f42032b5d0719ea81142c54b744d1d1161be8c01a9cc4854467f359d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730112954004854375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d358cbec9d53960f2e8c2a073980ca,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56c43087-dfda-4dc4-9f86-100ca19380d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.479424332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80632c53-e9cd-401f-aed6-b13c5bdb4e8a name=/runtime.v1.RuntimeService/Version
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.479508499Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80632c53-e9cd-401f-aed6-b13c5bdb4e8a name=/runtime.v1.RuntimeService/Version
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.481388176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4acf5950-3535-4c23-9d26-f686a43d6791 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.483578485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113424483549779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4acf5950-3535-4c23-9d26-f686a43d6791 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.484354068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2368001-95e0-4a46-942f-ad60673018ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.484410246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2368001-95e0-4a46-942f-ad60673018ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.484658853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45493c27b57f6314f0b50fe2adfd5d92b9957c5f5c62136eaa731df053687fca,PodSandboxId:e2546b31e218e78c0a0697e14dce01182ac744cdba7a0e0a1ddac9a24315d3fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730113247729004770,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-dv9b6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e92733f-d380-42bd-b6ae-3b7e7fdafb42,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bbfd469316d63e98513adec00255a025913d65e3c42e43499d8e7f9dde137bf,PodSandboxId:56c70124ad7b9cf1d45e1d3185a3d5187090eaeca0bad90ebdec95bfad89167d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730113108278734917,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2818a832-80db-43ce-ad06-1d48dd9ab54e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c943bfd8c1add619282928a9c70e4aae114aaa8b9c9f1101561b1a540fbaf976,PodSandboxId:c15cc11944ae6930d61769ef62b046eaac8edbe71566e3c36d65040133386ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730113062774249467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da189efe-7ffa-4bdf-8
7b1-c414bec80098,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d766987fd8a8b8457f9cddfb073fe57f87ba6c692c204b21dc95bab725d6f56,PodSandboxId:7ce71747e8fd706948633a9c7f3c9b305ff31f2bc3be6a44e5b0db4525abfbbd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730113012065808270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-77nkc,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9525ccf5-beb0-48e3-9612-30e31a087ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c42ee46198bd2cf39a0f0d95e80f41547590799a8dd608b84ac08c1eac7eeaf,PodSandboxId:4123bc4c5f2a85ca9b5b3f2ab226bd429cd6d4e20d27de2de9b2be57bd2c9f58,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730112986946771921,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-748cp,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863279c2-0842-48b9-8840-31351b7a7bbc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3,PodSandboxId:9d7b08ce55325cb76d1eb32007dfea4fb937669eda88604d5a6fe2a881502cf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730112972895303394,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982,PodSandboxId:1df353cc353110f318c4c2bc25bff2565de933e16806d45b9861a1560562f5a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730112968138612873,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ck8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22aed405-7302-480a-b873-02aecdc8c874,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333,PodSandboxId:49a4b2cf73cca1b0789a4194dca185e1a014e6f02100a26b52ca1e48eb1678e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730112965503468081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgxl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c85b65a-0083-48cd-8852-3ea8b3024bf3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e,PodSandboxId:04a8e192f2825d752429db585411be59560d20bcf811b491dba0c69105b41d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730112954026512304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7012c93fee37dd1aba5ee6cd983cc2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a50d3d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7,PodSandboxId:7c50b2a8ac7b3205b570362fb4d90aadd61611653848fd01096481bd16541859,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730112953978189950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306263e3cf96cfdffef24db7e5f787e3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2187e35168339bc921a854f47f0bc151162610a8cc97a7ae9aafbe62afc8e52,PodSandboxId:ad6a958974299b64fd49be850fe3d1c691052bb5325c0dd77fb50eaa75cad46b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730112953972368798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60773ad812876b76f1cfd70b128a82db,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064a9faa86b1838032c06de061cb7e0437be565324a77540d093a395e8bec074,PodSandboxId:3f911272f42032b5d0719ea81142c54b744d1d1161be8c01a9cc4854467f359d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730112954004854375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d358cbec9d53960f2e8c2a073980ca,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2368001-95e0-4a46-942f-ad60673018ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.520468304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b58e3198-8bb2-4ea6-9b5c-5d5ad618b425 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.520548727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b58e3198-8bb2-4ea6-9b5c-5d5ad618b425 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.522117035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=396ef0b3-f4c1-487a-8d13-be24b7651eb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.524137756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113424524097399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=396ef0b3-f4c1-487a-8d13-be24b7651eb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.528461596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2de24fb-25e2-4c17-b4dc-62e67ffd958f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.528651718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2de24fb-25e2-4c17-b4dc-62e67ffd958f name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:03:44 addons-892779 crio[665]: time="2024-10-28 11:03:44.529467604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45493c27b57f6314f0b50fe2adfd5d92b9957c5f5c62136eaa731df053687fca,PodSandboxId:e2546b31e218e78c0a0697e14dce01182ac744cdba7a0e0a1ddac9a24315d3fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1730113247729004770,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-dv9b6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e92733f-d380-42bd-b6ae-3b7e7fdafb42,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bbfd469316d63e98513adec00255a025913d65e3c42e43499d8e7f9dde137bf,PodSandboxId:56c70124ad7b9cf1d45e1d3185a3d5187090eaeca0bad90ebdec95bfad89167d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1730113108278734917,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2818a832-80db-43ce-ad06-1d48dd9ab54e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c943bfd8c1add619282928a9c70e4aae114aaa8b9c9f1101561b1a540fbaf976,PodSandboxId:c15cc11944ae6930d61769ef62b046eaac8edbe71566e3c36d65040133386ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730113062774249467,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da189efe-7ffa-4bdf-8
7b1-c414bec80098,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d766987fd8a8b8457f9cddfb073fe57f87ba6c692c204b21dc95bab725d6f56,PodSandboxId:7ce71747e8fd706948633a9c7f3c9b305ff31f2bc3be6a44e5b0db4525abfbbd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1730113012065808270,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-77nkc,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9525ccf5-beb0-48e3-9612-30e31a087ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c42ee46198bd2cf39a0f0d95e80f41547590799a8dd608b84ac08c1eac7eeaf,PodSandboxId:4123bc4c5f2a85ca9b5b3f2ab226bd429cd6d4e20d27de2de9b2be57bd2c9f58,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1730112986946771921,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-748cp,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 863279c2-0842-48b9-8840-31351b7a7bbc,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3,PodSandboxId:9d7b08ce55325cb76d1eb32007dfea4fb937669eda88604d5a6fe2a881502cf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730112972895303394,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eaf85a0-e54d-4caf-a194-bbab7aa5dc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982,PodSandboxId:1df353cc353110f318c4c2bc25bff2565de933e16806d45b9861a1560562f5a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730112968138612873,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ck8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22aed405-7302-480a-b873-02aecdc8c874,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333,PodSandboxId:49a4b2cf73cca1b0789a4194dca185e1a014e6f02100a26b52ca1e48eb1678e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730112965503468081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgxl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c85b65a-0083-48cd-8852-3ea8b3024bf3,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e,PodSandboxId:04a8e192f2825d752429db585411be59560d20bcf811b491dba0c69105b41d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730112954026512304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7012c93fee37dd1aba5ee6cd983cc2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84a50d3d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7,PodSandboxId:7c50b2a8ac7b3205b570362fb4d90aadd61611653848fd01096481bd16541859,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e
294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730112953978189950,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 306263e3cf96cfdffef24db7e5f787e3,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2187e35168339bc921a854f47f0bc151162610a8cc97a7ae9aafbe62afc8e52,PodSandboxId:ad6a958974299b64fd49be850fe3d1c691052bb5325c0dd77fb50eaa75cad46b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b4450
3,State:CONTAINER_RUNNING,CreatedAt:1730112953972368798,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60773ad812876b76f1cfd70b128a82db,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064a9faa86b1838032c06de061cb7e0437be565324a77540d093a395e8bec074,PodSandboxId:3f911272f42032b5d0719ea81142c54b744d1d1161be8c01a9cc4854467f359d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1730112954004854375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892779,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2d358cbec9d53960f2e8c2a073980ca,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2de24fb-25e2-4c17-b4dc-62e67ffd958f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	45493c27b57f6       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   e2546b31e218e       hello-world-app-55bf9c44b4-dv9b6
	6bbfd469316d6       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   56c70124ad7b9       nginx
	c943bfd8c1add       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   c15cc11944ae6       busybox
	9d766987fd8a8       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                6 minutes ago       Running             amd-gpu-device-plugin     0                   7ce71747e8fd7       amd-gpu-device-plugin-77nkc
	1c42ee46198bd       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   4123bc4c5f2a8       metrics-server-84c5f94fbc-748cp
	1bc876b3fa526       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   9d7b08ce55325       storage-provisioner
	52148186558e4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   1df353cc35311       coredns-7c65d6cfc9-6ck8n
	4d910baa7d462       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   49a4b2cf73cca       kube-proxy-pgxl7
	ddfe3ef897e6e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   04a8e192f2825       etcd-addons-892779
	064a9faa86b18       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        7 minutes ago       Running             kube-apiserver            0                   3f911272f4203       kube-apiserver-addons-892779
	84a50d3d1e447       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        7 minutes ago       Running             kube-scheduler            0                   7c50b2a8ac7b3       kube-scheduler-addons-892779
	e2187e3516833       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        7 minutes ago       Running             kube-controller-manager   0                   ad6a958974299       kube-controller-manager-addons-892779
	
	
	==> coredns [52148186558e44b28834a1b5330ab2d741facbcee9c74bb2dbef5fa1b4438982] <==
	[INFO] 10.244.0.22:46627 - 38296 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000239756s
	[INFO] 10.244.0.22:46627 - 27945 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101647s
	[INFO] 10.244.0.22:46374 - 58418 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061742s
	[INFO] 10.244.0.22:46627 - 36402 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000172011s
	[INFO] 10.244.0.22:46627 - 24478 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085443s
	[INFO] 10.244.0.22:46627 - 42631 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055756s
	[INFO] 10.244.0.22:46374 - 64579 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000306214s
	[INFO] 10.244.0.22:46627 - 17438 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000128544s
	[INFO] 10.244.0.22:46374 - 49740 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041531s
	[INFO] 10.244.0.22:46374 - 56197 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055942s
	[INFO] 10.244.0.22:46374 - 22301 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000150682s
	[INFO] 10.244.0.22:44761 - 30030 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000095276s
	[INFO] 10.244.0.22:50402 - 58559 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070359s
	[INFO] 10.244.0.22:44761 - 23602 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075372s
	[INFO] 10.244.0.22:44761 - 49825 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037725s
	[INFO] 10.244.0.22:44761 - 32668 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003319s
	[INFO] 10.244.0.22:44761 - 4981 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039175s
	[INFO] 10.244.0.22:44761 - 62264 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002918s
	[INFO] 10.244.0.22:44761 - 58404 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037672s
	[INFO] 10.244.0.22:50402 - 20906 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059444s
	[INFO] 10.244.0.22:50402 - 24689 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069697s
	[INFO] 10.244.0.22:50402 - 57947 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064153s
	[INFO] 10.244.0.22:50402 - 43866 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061865s
	[INFO] 10.244.0.22:50402 - 34562 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072681s
	[INFO] 10.244.0.22:50402 - 25289 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074358s
	
	
	==> describe nodes <==
	Name:               addons-892779
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-892779
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=addons-892779
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T10_56_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-892779
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 10:55:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-892779
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:03:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:01:07 +0000   Mon, 28 Oct 2024 10:55:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:01:07 +0000   Mon, 28 Oct 2024 10:55:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:01:07 +0000   Mon, 28 Oct 2024 10:55:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:01:07 +0000   Mon, 28 Oct 2024 10:56:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    addons-892779
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e962354ae1a445f86f73c5d50c26841
	  System UUID:                8e962354-ae1a-445f-86f7-3c5d50c26841
	  Boot ID:                    109ad88a-d9b2-40ba-a8fe-b508dd97271e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     hello-world-app-55bf9c44b4-dv9b6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 amd-gpu-device-plugin-77nkc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 coredns-7c65d6cfc9-6ck8n                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m40s
	  kube-system                 etcd-addons-892779                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m46s
	  kube-system                 kube-apiserver-addons-892779             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-controller-manager-addons-892779    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-proxy-pgxl7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-scheduler-addons-892779             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 metrics-server-84c5f94fbc-748cp          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m33s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m38s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m51s (x8 over 7m51s)  kubelet          Node addons-892779 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m51s (x8 over 7m51s)  kubelet          Node addons-892779 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m51s (x7 over 7m51s)  kubelet          Node addons-892779 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m45s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m45s                  kubelet          Node addons-892779 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s                  kubelet          Node addons-892779 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m45s                  kubelet          Node addons-892779 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m44s                  kubelet          Node addons-892779 status is now: NodeReady
	  Normal  RegisteredNode           7m41s                  node-controller  Node addons-892779 event: Registered Node addons-892779 in Controller
	  Normal  CIDRAssignmentFailed     7m41s                  cidrAllocator    Node addons-892779 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +5.001334] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.179892] kauditd_printk_skb: 131 callbacks suppressed
	[  +9.281163] kauditd_printk_skb: 92 callbacks suppressed
	[ +10.928316] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.591371] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.069154] kauditd_printk_skb: 4 callbacks suppressed
	[Oct28 10:57] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.119476] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.261308] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.416894] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.348892] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.378789] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.256652] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.243666] kauditd_printk_skb: 2 callbacks suppressed
	[Oct28 10:58] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.528815] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.013248] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.485731] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.301666] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.556706] kauditd_printk_skb: 53 callbacks suppressed
	[ +21.459436] kauditd_printk_skb: 2 callbacks suppressed
	[Oct28 10:59] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.869945] kauditd_printk_skb: 7 callbacks suppressed
	[Oct28 11:00] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.388942] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [ddfe3ef897e6e4c33987355572acf535c8c0ab72f6fef373f1d195d8d2ff019e] <==
	{"level":"warn","ts":"2024-10-28T10:57:03.449892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.155709ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:57:03.450012Z","caller":"traceutil/trace.go:171","msg":"trace[515150145] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1001; }","duration":"123.206918ms","start":"2024-10-28T10:57:03.326714Z","end":"2024-10-28T10:57:03.449921Z","steps":["trace[515150145] 'agreement among raft nodes before linearized reading'  (duration: 122.594215ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:57:03.450382Z","caller":"traceutil/trace.go:171","msg":"trace[1439441168] transaction","detail":"{read_only:false; response_revision:1001; number_of_response:1; }","duration":"176.992769ms","start":"2024-10-28T10:57:03.273372Z","end":"2024-10-28T10:57:03.450365Z","steps":["trace[1439441168] 'process raft request'  (duration: 175.774551ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:57:03.861815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.187022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:57:03.861916Z","caller":"traceutil/trace.go:171","msg":"trace[991144534] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1001; }","duration":"182.297719ms","start":"2024-10-28T10:57:03.679608Z","end":"2024-10-28T10:57:03.861906Z","steps":["trace[991144534] 'range keys from in-memory index tree'  (duration: 182.143811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:57:03.862130Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.10578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:57:03.862172Z","caller":"traceutil/trace.go:171","msg":"trace[1342549661] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1001; }","duration":"290.15281ms","start":"2024-10-28T10:57:03.572013Z","end":"2024-10-28T10:57:03.862165Z","steps":["trace[1342549661] 'range keys from in-memory index tree'  (duration: 290.066027ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:57:11.693343Z","caller":"traceutil/trace.go:171","msg":"trace[841996832] transaction","detail":"{read_only:false; response_revision:1034; number_of_response:1; }","duration":"191.31022ms","start":"2024-10-28T10:57:11.502018Z","end":"2024-10-28T10:57:11.693328Z","steps":["trace[841996832] 'process raft request'  (duration: 191.183504ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:57:11.693723Z","caller":"traceutil/trace.go:171","msg":"trace[836715530] linearizableReadLoop","detail":"{readStateIndex:1068; appliedIndex:1068; }","duration":"122.850036ms","start":"2024-10-28T10:57:11.570864Z","end":"2024-10-28T10:57:11.693714Z","steps":["trace[836715530] 'read index received'  (duration: 122.84571ms)","trace[836715530] 'applied index is now lower than readState.Index'  (duration: 3.876µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T10:57:11.693830Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.945281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:57:11.693879Z","caller":"traceutil/trace.go:171","msg":"trace[1462564819] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1034; }","duration":"123.013438ms","start":"2024-10-28T10:57:11.570860Z","end":"2024-10-28T10:57:11.693873Z","steps":["trace[1462564819] 'agreement among raft nodes before linearized reading'  (duration: 122.931146ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:57:11.697226Z","caller":"traceutil/trace.go:171","msg":"trace[1576052646] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"107.507643ms","start":"2024-10-28T10:57:11.589707Z","end":"2024-10-28T10:57:11.697214Z","steps":["trace[1576052646] 'process raft request'  (duration: 107.277096ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:58:02.187867Z","caller":"traceutil/trace.go:171","msg":"trace[527153399] transaction","detail":"{read_only:false; response_revision:1337; number_of_response:1; }","duration":"350.011383ms","start":"2024-10-28T10:58:01.837840Z","end":"2024-10-28T10:58:02.187851Z","steps":["trace[527153399] 'process raft request'  (duration: 349.92303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:58:02.188070Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T10:58:01.837826Z","time spent":"350.168673ms","remote":"127.0.0.1:33176","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1305 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-10-28T10:58:02.188388Z","caller":"traceutil/trace.go:171","msg":"trace[624154602] linearizableReadLoop","detail":"{readStateIndex:1384; appliedIndex:1384; }","duration":"319.93912ms","start":"2024-10-28T10:58:01.868440Z","end":"2024-10-28T10:58:02.188379Z","steps":["trace[624154602] 'read index received'  (duration: 319.93669ms)","trace[624154602] 'applied index is now lower than readState.Index'  (duration: 1.999µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T10:58:02.188474Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.049291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T10:58:02.188518Z","caller":"traceutil/trace.go:171","msg":"trace[1650348841] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1337; }","duration":"320.09806ms","start":"2024-10-28T10:58:01.868414Z","end":"2024-10-28T10:58:02.188512Z","steps":["trace[1650348841] 'agreement among raft nodes before linearized reading'  (duration: 320.036697ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:58:02.188544Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T10:58:01.868383Z","time spent":"320.155938ms","remote":"127.0.0.1:32868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-28T10:58:02.188685Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.973153ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-28T10:58:02.188722Z","caller":"traceutil/trace.go:171","msg":"trace[1340379804] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1337; }","duration":"289.009445ms","start":"2024-10-28T10:58:01.899708Z","end":"2024-10-28T10:58:02.188717Z","steps":["trace[1340379804] 'agreement among raft nodes before linearized reading'  (duration: 288.929858ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T10:58:02.189199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.743396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-28T10:58:02.190903Z","caller":"traceutil/trace.go:171","msg":"trace[1361345971] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1337; }","duration":"125.443578ms","start":"2024-10-28T10:58:02.065443Z","end":"2024-10-28T10:58:02.190887Z","steps":["trace[1361345971] 'agreement among raft nodes before linearized reading'  (duration: 123.685479ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:58:36.631913Z","caller":"traceutil/trace.go:171","msg":"trace[1440678640] transaction","detail":"{read_only:false; response_revision:1618; number_of_response:1; }","duration":"235.447347ms","start":"2024-10-28T10:58:36.396426Z","end":"2024-10-28T10:58:36.631873Z","steps":["trace[1440678640] 'process raft request'  (duration: 235.230015ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:58:51.658342Z","caller":"traceutil/trace.go:171","msg":"trace[654820603] transaction","detail":"{read_only:false; response_revision:1651; number_of_response:1; }","duration":"235.456005ms","start":"2024-10-28T10:58:51.422870Z","end":"2024-10-28T10:58:51.658326Z","steps":["trace[654820603] 'process raft request'  (duration: 235.356092ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T10:59:22.346860Z","caller":"traceutil/trace.go:171","msg":"trace[412176079] transaction","detail":"{read_only:false; response_revision:1753; number_of_response:1; }","duration":"275.340853ms","start":"2024-10-28T10:59:22.071508Z","end":"2024-10-28T10:59:22.346849Z","steps":["trace[412176079] 'process raft request'  (duration: 275.02163ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:03:44 up 8 min,  0 users,  load average: 0.15, 0.72, 0.56
	Linux addons-892779 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [064a9faa86b1838032c06de061cb7e0437be565324a77540d093a395e8bec074] <==
	I1028 10:57:36.987511       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1028 10:57:37.013716       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1028 10:57:48.123577       1 conn.go:339] Error on socket receive: read tcp 192.168.39.106:8443->192.168.39.1:48188: use of closed network connection
	E1028 10:57:48.311146       1 conn.go:339] Error on socket receive: read tcp 192.168.39.106:8443->192.168.39.1:48200: use of closed network connection
	I1028 10:57:57.624007       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.48.229"}
	I1028 10:58:03.447192       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1028 10:58:04.490332       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1028 10:58:25.390292       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1028 10:58:25.648728       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.106.12"}
	E1028 10:58:44.334362       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1028 10:58:58.212703       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1028 10:59:23.200424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.200504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 10:59:23.227437       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.227492       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 10:59:23.239639       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.239700       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 10:59:23.279238       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.279293       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1028 10:59:23.414209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1028 10:59:23.414332       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1028 10:59:24.227700       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1028 10:59:24.414895       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1028 10:59:24.426878       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1028 11:00:44.765315       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.58.42"}
	
	
	==> kube-controller-manager [e2187e35168339bc921a854f47f0bc151162610a8cc97a7ae9aafbe62afc8e52] <==
	E1028 11:01:22.165391       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:01:22.998018       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:01:22.998071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:01:38.848372       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:01:38.848440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:01:56.335181       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:01:56.335325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:02:08.379854       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:02:08.380054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:02:16.407452       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:02:16.407515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:02:19.429002       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:02:19.429056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:02:48.126358       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:02:48.126494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:02:49.537483       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:02:49.537584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:02:54.510795       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:02:54.510913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:03:13.584532       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:03:13.584651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:03:26.217301       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:03:26.217420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1028 11:03:26.971591       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1028 11:03:26.971725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [4d910baa7d462d37b01a60c979631badf50b0ffabcbfd2752c0b48394277d333] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 10:56:06.297114       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 10:56:06.327429       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E1028 10:56:06.327586       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 10:56:06.448807       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 10:56:06.448839       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 10:56:06.448875       1 server_linux.go:169] "Using iptables Proxier"
	I1028 10:56:06.453750       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 10:56:06.454111       1 server.go:483] "Version info" version="v1.31.2"
	I1028 10:56:06.454124       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 10:56:06.455610       1 config.go:199] "Starting service config controller"
	I1028 10:56:06.455626       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 10:56:06.455657       1 config.go:105] "Starting endpoint slice config controller"
	I1028 10:56:06.455661       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 10:56:06.456340       1 config.go:328] "Starting node config controller"
	I1028 10:56:06.456352       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 10:56:06.557878       1 shared_informer.go:320] Caches are synced for node config
	I1028 10:56:06.557930       1 shared_informer.go:320] Caches are synced for service config
	I1028 10:56:06.558000       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [84a50d3d1e447328e3c2727d6ccf73f3a529041bea60e13a0633fd20aa8bd0a7] <==
	W1028 10:55:57.907526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1028 10:55:57.909337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:57.915393       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 10:55:57.915584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:57.958210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 10:55:57.958300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.031173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 10:55:58.031226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.081308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 10:55:58.081445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.081871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 10:55:58.081933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.175132       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 10:55:58.175182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.204982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 10:55:58.205110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.258721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 10:55:58.258816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.276427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 10:55:58.276750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.280041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 10:55:58.280150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 10:55:58.281844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 10:55:58.281910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1028 10:56:00.267199       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 11:02:20 addons-892779 kubelet[1203]: E1028 11:02:20.401394    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113340401018628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:02:20 addons-892779 kubelet[1203]: E1028 11:02:20.401495    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113340401018628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:02:30 addons-892779 kubelet[1203]: E1028 11:02:30.404211    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113350403770238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:02:30 addons-892779 kubelet[1203]: E1028 11:02:30.404257    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113350403770238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:02:40 addons-892779 kubelet[1203]: E1028 11:02:40.407277    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113360406754982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:02:40 addons-892779 kubelet[1203]: E1028 11:02:40.407339    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113360406754982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:02:45 addons-892779 kubelet[1203]: I1028 11:02:45.885008    1203 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 28 11:02:50 addons-892779 kubelet[1203]: E1028 11:02:50.410102    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113370409659572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:02:50 addons-892779 kubelet[1203]: E1028 11:02:50.410145    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113370409659572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:02:59 addons-892779 kubelet[1203]: E1028 11:02:59.898903    1203 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:02:59 addons-892779 kubelet[1203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:02:59 addons-892779 kubelet[1203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:02:59 addons-892779 kubelet[1203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:02:59 addons-892779 kubelet[1203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:03:00 addons-892779 kubelet[1203]: E1028 11:03:00.413006    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113380412521097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:03:00 addons-892779 kubelet[1203]: E1028 11:03:00.413034    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113380412521097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:03:01 addons-892779 kubelet[1203]: I1028 11:03:01.884167    1203 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-77nkc" secret="" err="secret \"gcp-auth\" not found"
	Oct 28 11:03:10 addons-892779 kubelet[1203]: E1028 11:03:10.416239    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113390415616342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:03:10 addons-892779 kubelet[1203]: E1028 11:03:10.416318    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113390415616342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:03:20 addons-892779 kubelet[1203]: E1028 11:03:20.419261    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113400418759035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:03:20 addons-892779 kubelet[1203]: E1028 11:03:20.419545    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113400418759035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:03:30 addons-892779 kubelet[1203]: E1028 11:03:30.422568    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113410422110514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:03:30 addons-892779 kubelet[1203]: E1028 11:03:30.422913    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113410422110514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:03:40 addons-892779 kubelet[1203]: E1028 11:03:40.426013    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113420425503891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:03:40 addons-892779 kubelet[1203]: E1028 11:03:40.426471    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730113420425503891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1bc876b3fa5264c44234b8a261ed673df8f98725bcc4a372b1c7453d914c9dc3] <==
	I1028 10:56:14.457485       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 10:56:14.526420       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 10:56:14.526483       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 10:56:14.561540       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 10:56:14.564476       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-892779_92f07eb9-7e6b-4755-9966-4a2a450cacc0!
	I1028 10:56:14.577854       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afa8d239-0350-46d2-83fe-2e4e4ea51edf", APIVersion:"v1", ResourceVersion:"761", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-892779_92f07eb9-7e6b-4755-9966-4a2a450cacc0 became leader
	I1028 10:56:14.766496       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-892779_92f07eb9-7e6b-4755-9966-4a2a450cacc0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-892779 -n addons-892779
helpers_test.go:261: (dbg) Run:  kubectl --context addons-892779 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (349.57s)

x
+
TestAddons/StoppedEnableDisable (154.46s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-892779
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-892779: exit status 82 (2m0.49084396s)

-- stdout --
	* Stopping node "addons-892779"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-892779" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-892779
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-892779: exit status 11 (21.682890929s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-892779" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-892779
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-892779: exit status 11 (6.143148681s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-892779" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-892779
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-892779: exit status 11 (6.142660901s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-892779" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.46s)
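A minimal reproduction sketch for the GUEST_STOP_TIMEOUT above (not part of the recorded test output), assuming a local KVM/libvirt host and that the kvm2 driver named the libvirt domain after the profile:

	# Re-run the stop with verbose driver logging to see where it blocks
	out/minikube-linux-amd64 stop -p addons-892779 --alsologtostderr -v=3
	# Check the VM state directly through libvirt (domain name is an assumption)
	virsh list --all
	virsh domstate addons-892779
	# Collect the log bundle referenced in the failure message
	out/minikube-linux-amd64 -p addons-892779 logs --file=logs.txt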

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 node stop m02 -v=7 --alsologtostderr
E1028 11:16:31.825376  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:17:38.998706  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-928358 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.491596181s)

                                                
                                                
-- stdout --
	* Stopping node "ha-928358-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:15:52.877221  154841 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:15:52.877353  154841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:15:52.877361  154841 out.go:358] Setting ErrFile to fd 2...
	I1028 11:15:52.877366  154841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:15:52.877551  154841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:15:52.877799  154841 mustload.go:65] Loading cluster: ha-928358
	I1028 11:15:52.878206  154841 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:15:52.878221  154841 stop.go:39] StopHost: ha-928358-m02
	I1028 11:15:52.878569  154841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:15:52.878618  154841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:15:52.894096  154841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I1028 11:15:52.894592  154841 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:15:52.895169  154841 main.go:141] libmachine: Using API Version  1
	I1028 11:15:52.895199  154841 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:15:52.895660  154841 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:15:52.898548  154841 out.go:177] * Stopping node "ha-928358-m02"  ...
	I1028 11:15:52.899967  154841 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 11:15:52.900016  154841 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:15:52.900313  154841 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 11:15:52.900347  154841 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:15:52.903932  154841 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:15:52.904496  154841 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:15:52.904543  154841 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:15:52.904755  154841 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:15:52.905100  154841 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:15:52.905282  154841 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:15:52.905451  154841 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:15:52.995234  154841 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 11:15:53.050417  154841 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 11:15:53.105827  154841 main.go:141] libmachine: Stopping "ha-928358-m02"...
	I1028 11:15:53.105868  154841 main.go:141] libmachine: (ha-928358-m02) Calling .GetState
	I1028 11:15:53.107667  154841 main.go:141] libmachine: (ha-928358-m02) Calling .Stop
	I1028 11:15:53.111643  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 0/120
	I1028 11:15:54.112828  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 1/120
	I1028 11:15:55.114301  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 2/120
	I1028 11:15:56.116299  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 3/120
	I1028 11:15:57.117755  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 4/120
	I1028 11:15:58.120043  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 5/120
	I1028 11:15:59.121365  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 6/120
	I1028 11:16:00.122919  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 7/120
	I1028 11:16:01.125241  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 8/120
	I1028 11:16:02.126530  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 9/120
	I1028 11:16:03.128200  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 10/120
	I1028 11:16:04.130018  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 11/120
	I1028 11:16:05.132156  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 12/120
	I1028 11:16:06.134087  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 13/120
	I1028 11:16:07.136009  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 14/120
	I1028 11:16:08.137877  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 15/120
	I1028 11:16:09.140332  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 16/120
	I1028 11:16:10.141892  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 17/120
	I1028 11:16:11.144286  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 18/120
	I1028 11:16:12.145936  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 19/120
	I1028 11:16:13.148149  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 20/120
	I1028 11:16:14.150146  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 21/120
	I1028 11:16:15.152016  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 22/120
	I1028 11:16:16.153434  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 23/120
	I1028 11:16:17.154927  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 24/120
	I1028 11:16:18.156956  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 25/120
	I1028 11:16:19.158317  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 26/120
	I1028 11:16:20.160456  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 27/120
	I1028 11:16:21.161650  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 28/120
	I1028 11:16:22.163320  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 29/120
	I1028 11:16:23.165594  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 30/120
	I1028 11:16:24.166919  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 31/120
	I1028 11:16:25.168407  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 32/120
	I1028 11:16:26.169857  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 33/120
	I1028 11:16:27.171867  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 34/120
	I1028 11:16:28.173822  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 35/120
	I1028 11:16:29.175221  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 36/120
	I1028 11:16:30.176969  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 37/120
	I1028 11:16:31.178451  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 38/120
	I1028 11:16:32.179927  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 39/120
	I1028 11:16:33.182319  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 40/120
	I1028 11:16:34.183593  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 41/120
	I1028 11:16:35.185058  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 42/120
	I1028 11:16:36.186370  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 43/120
	I1028 11:16:37.187852  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 44/120
	I1028 11:16:38.189875  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 45/120
	I1028 11:16:39.192021  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 46/120
	I1028 11:16:40.193687  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 47/120
	I1028 11:16:41.196200  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 48/120
	I1028 11:16:42.197639  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 49/120
	I1028 11:16:43.199940  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 50/120
	I1028 11:16:44.201419  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 51/120
	I1028 11:16:45.203116  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 52/120
	I1028 11:16:46.204686  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 53/120
	I1028 11:16:47.206085  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 54/120
	I1028 11:16:48.207786  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 55/120
	I1028 11:16:49.209096  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 56/120
	I1028 11:16:50.210572  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 57/120
	I1028 11:16:51.212263  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 58/120
	I1028 11:16:52.213640  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 59/120
	I1028 11:16:53.215708  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 60/120
	I1028 11:16:54.217347  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 61/120
	I1028 11:16:55.219090  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 62/120
	I1028 11:16:56.220588  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 63/120
	I1028 11:16:57.222691  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 64/120
	I1028 11:16:58.224642  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 65/120
	I1028 11:16:59.226155  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 66/120
	I1028 11:17:00.228020  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 67/120
	I1028 11:17:01.229987  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 68/120
	I1028 11:17:02.231391  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 69/120
	I1028 11:17:03.233659  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 70/120
	I1028 11:17:04.236142  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 71/120
	I1028 11:17:05.237419  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 72/120
	I1028 11:17:06.238850  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 73/120
	I1028 11:17:07.240328  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 74/120
	I1028 11:17:08.242670  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 75/120
	I1028 11:17:09.244675  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 76/120
	I1028 11:17:10.246448  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 77/120
	I1028 11:17:11.248013  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 78/120
	I1028 11:17:12.249613  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 79/120
	I1028 11:17:13.251940  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 80/120
	I1028 11:17:14.253447  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 81/120
	I1028 11:17:15.254817  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 82/120
	I1028 11:17:16.256734  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 83/120
	I1028 11:17:17.258836  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 84/120
	I1028 11:17:18.260853  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 85/120
	I1028 11:17:19.262364  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 86/120
	I1028 11:17:20.263910  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 87/120
	I1028 11:17:21.265595  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 88/120
	I1028 11:17:22.268060  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 89/120
	I1028 11:17:23.270424  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 90/120
	I1028 11:17:24.272095  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 91/120
	I1028 11:17:25.273709  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 92/120
	I1028 11:17:26.275318  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 93/120
	I1028 11:17:27.276801  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 94/120
	I1028 11:17:28.278971  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 95/120
	I1028 11:17:29.280596  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 96/120
	I1028 11:17:30.282056  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 97/120
	I1028 11:17:31.283895  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 98/120
	I1028 11:17:32.285785  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 99/120
	I1028 11:17:33.288243  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 100/120
	I1028 11:17:34.289795  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 101/120
	I1028 11:17:35.291549  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 102/120
	I1028 11:17:36.293249  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 103/120
	I1028 11:17:37.295082  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 104/120
	I1028 11:17:38.296562  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 105/120
	I1028 11:17:39.297975  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 106/120
	I1028 11:17:40.299224  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 107/120
	I1028 11:17:41.300734  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 108/120
	I1028 11:17:42.302216  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 109/120
	I1028 11:17:43.304507  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 110/120
	I1028 11:17:44.306056  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 111/120
	I1028 11:17:45.308091  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 112/120
	I1028 11:17:46.309620  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 113/120
	I1028 11:17:47.311073  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 114/120
	I1028 11:17:48.313587  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 115/120
	I1028 11:17:49.314994  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 116/120
	I1028 11:17:50.317052  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 117/120
	I1028 11:17:51.318357  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 118/120
	I1028 11:17:52.319833  154841 main.go:141] libmachine: (ha-928358-m02) Waiting for machine to stop 119/120
	I1028 11:17:53.320463  154841 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 11:17:53.320620  154841 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-928358 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr
E1028 11:17:53.747152  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr: (18.878213321s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
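A follow-up sketch for the m02 stop timeout above (not part of the recorded test output), assuming local libvirt access and that the kvm2 driver named the domain after the node:

	# The driver polled 120 times (~2 minutes) and the VM stayed "Running"; check it directly
	virsh domstate ha-928358-m02
	# Ask the guest to shut down gracefully, then force power-off if it ignores ACPI
	virsh shutdown ha-928358-m02
	virsh destroy ha-928358-m02
	# Re-check cluster health afterwards
	out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr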
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-928358 -n ha-928358
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 logs -n 25: (1.519924818s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m03_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m04 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp testdata/cp-test.txt                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m04_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03:/home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m03 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-928358 node stop m02 -v=7                                                    | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:10:59
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:10:59.463321  150723 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:10:59.463437  150723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:59.463447  150723 out.go:358] Setting ErrFile to fd 2...
	I1028 11:10:59.463453  150723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:59.463619  150723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:10:59.464198  150723 out.go:352] Setting JSON to false
	I1028 11:10:59.465062  150723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3202,"bootTime":1730110657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:10:59.465170  150723 start.go:139] virtualization: kvm guest
	I1028 11:10:59.467541  150723 out.go:177] * [ha-928358] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:10:59.469144  150723 notify.go:220] Checking for updates...
	I1028 11:10:59.469164  150723 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:10:59.470932  150723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:10:59.472579  150723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:10:59.474106  150723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:59.476022  150723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:10:59.477386  150723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:10:59.478873  150723 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:10:59.515106  150723 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:10:59.516643  150723 start.go:297] selected driver: kvm2
	I1028 11:10:59.516662  150723 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:10:59.516677  150723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:10:59.517412  150723 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:10:59.517509  150723 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:10:59.533665  150723 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:10:59.533714  150723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:10:59.533960  150723 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:10:59.533991  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:10:59.534033  150723 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:10:59.534056  150723 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:10:59.534109  150723 start.go:340] cluster config:
	{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1028 11:10:59.534204  150723 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:10:59.536334  150723 out.go:177] * Starting "ha-928358" primary control-plane node in "ha-928358" cluster
	I1028 11:10:59.537748  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:10:59.537794  150723 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:10:59.537802  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:10:59.537881  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:10:59.537891  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:10:59.538184  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:10:59.538208  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json: {Name:mkb8dad6cb32a1c4cc26cae85e4e9234d9821c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:10:59.538374  150723 start.go:360] acquireMachinesLock for ha-928358: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:10:59.538406  150723 start.go:364] duration metric: took 16.963µs to acquireMachinesLock for "ha-928358"
	I1028 11:10:59.538425  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:10:59.538479  150723 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:10:59.540050  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:10:59.540188  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:10:59.540238  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:10:59.555032  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I1028 11:10:59.555455  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:10:59.555961  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:10:59.556000  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:10:59.556420  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:10:59.556590  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:10:59.556764  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:10:59.556945  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:10:59.556977  150723 client.go:168] LocalClient.Create starting
	I1028 11:10:59.557015  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:10:59.557068  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:10:59.557092  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:10:59.557167  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:10:59.557195  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:10:59.557226  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:10:59.557253  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:10:59.557273  150723 main.go:141] libmachine: (ha-928358) Calling .PreCreateCheck
	I1028 11:10:59.557662  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:10:59.558063  150723 main.go:141] libmachine: Creating machine...
	I1028 11:10:59.558080  150723 main.go:141] libmachine: (ha-928358) Calling .Create
	I1028 11:10:59.558226  150723 main.go:141] libmachine: (ha-928358) Creating KVM machine...
	I1028 11:10:59.559811  150723 main.go:141] libmachine: (ha-928358) DBG | found existing default KVM network
	I1028 11:10:59.560481  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.560340  150746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I1028 11:10:59.560504  150723 main.go:141] libmachine: (ha-928358) DBG | created network xml: 
	I1028 11:10:59.560515  150723 main.go:141] libmachine: (ha-928358) DBG | <network>
	I1028 11:10:59.560521  150723 main.go:141] libmachine: (ha-928358) DBG |   <name>mk-ha-928358</name>
	I1028 11:10:59.560530  150723 main.go:141] libmachine: (ha-928358) DBG |   <dns enable='no'/>
	I1028 11:10:59.560536  150723 main.go:141] libmachine: (ha-928358) DBG |   
	I1028 11:10:59.560547  150723 main.go:141] libmachine: (ha-928358) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:10:59.560555  150723 main.go:141] libmachine: (ha-928358) DBG |     <dhcp>
	I1028 11:10:59.560564  150723 main.go:141] libmachine: (ha-928358) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:10:59.560572  150723 main.go:141] libmachine: (ha-928358) DBG |     </dhcp>
	I1028 11:10:59.560581  150723 main.go:141] libmachine: (ha-928358) DBG |   </ip>
	I1028 11:10:59.560587  150723 main.go:141] libmachine: (ha-928358) DBG |   
	I1028 11:10:59.560595  150723 main.go:141] libmachine: (ha-928358) DBG | </network>
	I1028 11:10:59.560601  150723 main.go:141] libmachine: (ha-928358) DBG | 
	I1028 11:10:59.566260  150723 main.go:141] libmachine: (ha-928358) DBG | trying to create private KVM network mk-ha-928358 192.168.39.0/24...
	I1028 11:10:59.635650  150723 main.go:141] libmachine: (ha-928358) DBG | private KVM network mk-ha-928358 192.168.39.0/24 created
	I1028 11:10:59.635720  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.635608  150746 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:59.635745  150723 main.go:141] libmachine: (ha-928358) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 ...
	I1028 11:10:59.635835  150723 main.go:141] libmachine: (ha-928358) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:10:59.635904  150723 main.go:141] libmachine: (ha-928358) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:10:59.913193  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.913037  150746 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa...
	I1028 11:10:59.999912  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.999757  150746 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/ha-928358.rawdisk...
	I1028 11:10:59.999940  150723 main.go:141] libmachine: (ha-928358) DBG | Writing magic tar header
	I1028 11:10:59.999950  150723 main.go:141] libmachine: (ha-928358) DBG | Writing SSH key tar header
	I1028 11:10:59.999957  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.999874  150746 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 ...
	I1028 11:10:59.999966  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358
	I1028 11:11:00.000011  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 (perms=drwx------)
	I1028 11:11:00.000025  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:11:00.000035  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:11:00.000055  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:11:00.000076  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:11:00.000090  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:00.000108  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:11:00.000117  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:11:00.000127  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:11:00.000138  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home
	I1028 11:11:00.000147  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:11:00.000160  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:11:00.000177  150723 main.go:141] libmachine: (ha-928358) DBG | Skipping /home - not owner
	I1028 11:11:00.000190  150723 main.go:141] libmachine: (ha-928358) Creating domain...
	I1028 11:11:00.001605  150723 main.go:141] libmachine: (ha-928358) define libvirt domain using xml: 
	I1028 11:11:00.001643  150723 main.go:141] libmachine: (ha-928358) <domain type='kvm'>
	I1028 11:11:00.001657  150723 main.go:141] libmachine: (ha-928358)   <name>ha-928358</name>
	I1028 11:11:00.001672  150723 main.go:141] libmachine: (ha-928358)   <memory unit='MiB'>2200</memory>
	I1028 11:11:00.001685  150723 main.go:141] libmachine: (ha-928358)   <vcpu>2</vcpu>
	I1028 11:11:00.001693  150723 main.go:141] libmachine: (ha-928358)   <features>
	I1028 11:11:00.001703  150723 main.go:141] libmachine: (ha-928358)     <acpi/>
	I1028 11:11:00.001711  150723 main.go:141] libmachine: (ha-928358)     <apic/>
	I1028 11:11:00.001724  150723 main.go:141] libmachine: (ha-928358)     <pae/>
	I1028 11:11:00.001748  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.001760  150723 main.go:141] libmachine: (ha-928358)   </features>
	I1028 11:11:00.001770  150723 main.go:141] libmachine: (ha-928358)   <cpu mode='host-passthrough'>
	I1028 11:11:00.001783  150723 main.go:141] libmachine: (ha-928358)   
	I1028 11:11:00.001795  150723 main.go:141] libmachine: (ha-928358)   </cpu>
	I1028 11:11:00.001806  150723 main.go:141] libmachine: (ha-928358)   <os>
	I1028 11:11:00.001820  150723 main.go:141] libmachine: (ha-928358)     <type>hvm</type>
	I1028 11:11:00.001839  150723 main.go:141] libmachine: (ha-928358)     <boot dev='cdrom'/>
	I1028 11:11:00.001851  150723 main.go:141] libmachine: (ha-928358)     <boot dev='hd'/>
	I1028 11:11:00.001863  150723 main.go:141] libmachine: (ha-928358)     <bootmenu enable='no'/>
	I1028 11:11:00.001872  150723 main.go:141] libmachine: (ha-928358)   </os>
	I1028 11:11:00.001884  150723 main.go:141] libmachine: (ha-928358)   <devices>
	I1028 11:11:00.001898  150723 main.go:141] libmachine: (ha-928358)     <disk type='file' device='cdrom'>
	I1028 11:11:00.001919  150723 main.go:141] libmachine: (ha-928358)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/boot2docker.iso'/>
	I1028 11:11:00.001933  150723 main.go:141] libmachine: (ha-928358)       <target dev='hdc' bus='scsi'/>
	I1028 11:11:00.001968  150723 main.go:141] libmachine: (ha-928358)       <readonly/>
	I1028 11:11:00.001991  150723 main.go:141] libmachine: (ha-928358)     </disk>
	I1028 11:11:00.002008  150723 main.go:141] libmachine: (ha-928358)     <disk type='file' device='disk'>
	I1028 11:11:00.002023  150723 main.go:141] libmachine: (ha-928358)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:11:00.002044  150723 main.go:141] libmachine: (ha-928358)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/ha-928358.rawdisk'/>
	I1028 11:11:00.002058  150723 main.go:141] libmachine: (ha-928358)       <target dev='hda' bus='virtio'/>
	I1028 11:11:00.002070  150723 main.go:141] libmachine: (ha-928358)     </disk>
	I1028 11:11:00.002106  150723 main.go:141] libmachine: (ha-928358)     <interface type='network'>
	I1028 11:11:00.002133  150723 main.go:141] libmachine: (ha-928358)       <source network='mk-ha-928358'/>
	I1028 11:11:00.002148  150723 main.go:141] libmachine: (ha-928358)       <model type='virtio'/>
	I1028 11:11:00.002159  150723 main.go:141] libmachine: (ha-928358)     </interface>
	I1028 11:11:00.002172  150723 main.go:141] libmachine: (ha-928358)     <interface type='network'>
	I1028 11:11:00.002179  150723 main.go:141] libmachine: (ha-928358)       <source network='default'/>
	I1028 11:11:00.002190  150723 main.go:141] libmachine: (ha-928358)       <model type='virtio'/>
	I1028 11:11:00.002197  150723 main.go:141] libmachine: (ha-928358)     </interface>
	I1028 11:11:00.002206  150723 main.go:141] libmachine: (ha-928358)     <serial type='pty'>
	I1028 11:11:00.002210  150723 main.go:141] libmachine: (ha-928358)       <target port='0'/>
	I1028 11:11:00.002216  150723 main.go:141] libmachine: (ha-928358)     </serial>
	I1028 11:11:00.002226  150723 main.go:141] libmachine: (ha-928358)     <console type='pty'>
	I1028 11:11:00.002250  150723 main.go:141] libmachine: (ha-928358)       <target type='serial' port='0'/>
	I1028 11:11:00.002282  150723 main.go:141] libmachine: (ha-928358)     </console>
	I1028 11:11:00.002291  150723 main.go:141] libmachine: (ha-928358)     <rng model='virtio'>
	I1028 11:11:00.002297  150723 main.go:141] libmachine: (ha-928358)       <backend model='random'>/dev/random</backend>
	I1028 11:11:00.002303  150723 main.go:141] libmachine: (ha-928358)     </rng>
	I1028 11:11:00.002306  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.002311  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.002318  150723 main.go:141] libmachine: (ha-928358)   </devices>
	I1028 11:11:00.002323  150723 main.go:141] libmachine: (ha-928358) </domain>
	I1028 11:11:00.002328  150723 main.go:141] libmachine: (ha-928358) 
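
The block above is the libvirt domain XML that the KVM driver defines for the ha-928358 VM. Purely as an illustration of that step (this is not minikube's driver code), a domain can be defined and started from such XML with the official libvirt.org/go/libvirt Go bindings; the XML below is trimmed to a minimal skeleton, whereas a bootable machine also needs the cdrom, disk and network devices shown in the log.

// Rough sketch using the libvirt.org/go/libvirt bindings: define a domain
// from an XML description and start it. A real machine additionally needs
// the cdrom/disk/interface devices shown in the log above.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	domainXML := `<domain type='kvm'>
  <name>example-vm</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
</domain>`

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	// Creating (starting) the defined domain corresponds to the
	// "Creating domain..." step in the log.
	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}
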
	I1028 11:11:00.006810  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:30:04:d3 in network default
	I1028 11:11:00.007391  150723 main.go:141] libmachine: (ha-928358) Ensuring networks are active...
	I1028 11:11:00.007412  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:00.008229  150723 main.go:141] libmachine: (ha-928358) Ensuring network default is active
	I1028 11:11:00.008655  150723 main.go:141] libmachine: (ha-928358) Ensuring network mk-ha-928358 is active
	I1028 11:11:00.009320  150723 main.go:141] libmachine: (ha-928358) Getting domain xml...
	I1028 11:11:00.010062  150723 main.go:141] libmachine: (ha-928358) Creating domain...
	I1028 11:11:01.218137  150723 main.go:141] libmachine: (ha-928358) Waiting to get IP...
	I1028 11:11:01.218922  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.219337  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.219385  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.219330  150746 retry.go:31] will retry after 310.252899ms: waiting for machine to come up
	I1028 11:11:01.530950  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.531414  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.531437  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.531371  150746 retry.go:31] will retry after 282.464528ms: waiting for machine to come up
	I1028 11:11:01.815720  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.816159  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.816184  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.816121  150746 retry.go:31] will retry after 304.583775ms: waiting for machine to come up
	I1028 11:11:02.122718  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:02.123224  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:02.123251  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:02.123154  150746 retry.go:31] will retry after 442.531578ms: waiting for machine to come up
	I1028 11:11:02.566777  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:02.567197  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:02.567222  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:02.567162  150746 retry.go:31] will retry after 677.799642ms: waiting for machine to come up
	I1028 11:11:03.246160  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:03.246663  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:03.246691  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:03.246611  150746 retry.go:31] will retry after 661.382392ms: waiting for machine to come up
	I1028 11:11:03.909443  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:03.909955  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:03.910006  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:03.909898  150746 retry.go:31] will retry after 1.086932803s: waiting for machine to come up
	I1028 11:11:04.997802  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:04.998295  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:04.998322  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:04.998231  150746 retry.go:31] will retry after 1.028978753s: waiting for machine to come up
	I1028 11:11:06.028312  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:06.028699  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:06.028724  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:06.028658  150746 retry.go:31] will retry after 1.229241603s: waiting for machine to come up
	I1028 11:11:07.259043  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:07.259415  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:07.259442  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:07.259356  150746 retry.go:31] will retry after 1.621101278s: waiting for machine to come up
	I1028 11:11:08.882760  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:08.883130  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:08.883166  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:08.883106  150746 retry.go:31] will retry after 2.010099388s: waiting for machine to come up
	I1028 11:11:10.894594  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:10.895005  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:10.895028  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:10.894965  150746 retry.go:31] will retry after 2.268994964s: waiting for machine to come up
	I1028 11:11:13.166469  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:13.166906  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:13.166930  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:13.166853  150746 retry.go:31] will retry after 2.964491157s: waiting for machine to come up
	I1028 11:11:16.134568  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:16.135014  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:16.135030  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:16.134978  150746 retry.go:31] will retry after 3.669669561s: waiting for machine to come up
	I1028 11:11:19.805844  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:19.806451  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:19.806483  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:19.806402  150746 retry.go:31] will retry after 6.986761695s: waiting for machine to come up
	I1028 11:11:26.796618  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.797199  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has current primary IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.797228  150723 main.go:141] libmachine: (ha-928358) Found IP for machine: 192.168.39.206
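
The "will retry after ..." lines above come from a polling loop that waits for the new domain to obtain a DHCP lease, backing off a little longer on each attempt. A minimal sketch of that pattern follows; the lookupIP helper is hypothetical and stands in for querying libvirt's lease table for the domain's MAC address.

// Sketch of the retry-with-backoff loop; lookupIP is a hypothetical stand-in
// for looking up the DHCP lease of the machine's MAC address.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease // lease not handed out yet
	}
	return "192.168.39.206", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter so repeated probes spread out,
		// matching the increasing "will retry after ..." intervals above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", errors.New("timed out waiting for an IP")
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP:", ip)
}
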
	I1028 11:11:26.797258  150723 main.go:141] libmachine: (ha-928358) Reserving static IP address...
	I1028 11:11:26.797624  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find host DHCP lease matching {name: "ha-928358", mac: "52:54:00:dd:b2:b7", ip: "192.168.39.206"} in network mk-ha-928358
	I1028 11:11:26.873582  150723 main.go:141] libmachine: (ha-928358) Reserved static IP address: 192.168.39.206
	I1028 11:11:26.873609  150723 main.go:141] libmachine: (ha-928358) Waiting for SSH to be available...
	I1028 11:11:26.873619  150723 main.go:141] libmachine: (ha-928358) DBG | Getting to WaitForSSH function...
	I1028 11:11:26.876283  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.876750  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:26.876781  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.876886  150723 main.go:141] libmachine: (ha-928358) DBG | Using SSH client type: external
	I1028 11:11:26.876901  150723 main.go:141] libmachine: (ha-928358) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa (-rw-------)
	I1028 11:11:26.876929  150723 main.go:141] libmachine: (ha-928358) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:11:26.876941  150723 main.go:141] libmachine: (ha-928358) DBG | About to run SSH command:
	I1028 11:11:26.876952  150723 main.go:141] libmachine: (ha-928358) DBG | exit 0
	I1028 11:11:27.009708  150723 main.go:141] libmachine: (ha-928358) DBG | SSH cmd err, output: <nil>: 
	I1028 11:11:27.010071  150723 main.go:141] libmachine: (ha-928358) KVM machine creation complete!
	I1028 11:11:27.010352  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:11:27.010925  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:27.011146  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:27.011301  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:11:27.011311  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:27.012679  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:11:27.012693  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:11:27.012699  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:11:27.012704  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.014867  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.015214  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.015263  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.015327  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.015507  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.015644  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.015739  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.015911  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.016106  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.016117  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:11:27.128876  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
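
The WaitForSSH step above simply keeps running "exit 0" on the guest until the command succeeds. A small sketch of that probe using the golang.org/x/crypto/ssh package; the address, user and key path mirror the log, everything else is illustrative.

// Sketch of the SSH readiness probe with golang.org/x/crypto/ssh: connect
// with the machine's generated private key and run "exit 0".
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Mirrors StrictHostKeyChecking=no in the external-ssh invocation above.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	if err := sshReady("192.168.39.206:22", "docker", "/path/to/id_rsa"); err != nil {
		log.Fatalf("ssh not ready: %v", err)
	}
	log.Println("ssh is available")
}
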
	I1028 11:11:27.128903  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:11:27.128915  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.131646  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.132081  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.132109  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.132331  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.132525  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.132697  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.132852  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.133070  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.133229  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.133242  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:11:27.250569  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:11:27.250647  150723 main.go:141] libmachine: found compatible host: buildroot
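
The provisioner is detected by reading /etc/os-release on the guest and matching its fields. A sketch of parsing that key=value output, using the exact output captured above as sample input:

// Sketch: parse the key=value lines of /etc/os-release and match on ID.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		// Values such as PRETTY_NAME are quoted; strip the quotes.
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	f := parseOSRelease(out)
	if f["ID"] == "buildroot" {
		fmt.Println("found compatible host:", f["ID"], f["VERSION_ID"])
	}
}
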
	I1028 11:11:27.250657  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:11:27.250664  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.250929  150723 buildroot.go:166] provisioning hostname "ha-928358"
	I1028 11:11:27.250971  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.251130  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.253765  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.254120  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.254146  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.254297  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.254451  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.254601  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.254758  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.254909  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.255102  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.255118  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358 && echo "ha-928358" | sudo tee /etc/hostname
	I1028 11:11:27.384932  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358
	
	I1028 11:11:27.384962  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.387904  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.388215  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.388243  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.388516  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.388719  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.388884  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.389002  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.389152  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.389334  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.389355  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:11:27.516473  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:11:27.516502  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:11:27.516519  150723 buildroot.go:174] setting up certificates
	I1028 11:11:27.516529  150723 provision.go:84] configureAuth start
	I1028 11:11:27.516537  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.516866  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:27.519682  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.520053  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.520077  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.520298  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.522648  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.522984  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.523022  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.523127  150723 provision.go:143] copyHostCerts
	I1028 11:11:27.523161  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:11:27.523220  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:11:27.523235  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:11:27.523317  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:11:27.523418  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:11:27.523442  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:11:27.523451  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:11:27.523494  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:11:27.523565  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:11:27.523591  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:11:27.523600  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:11:27.523634  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:11:27.523699  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358 san=[127.0.0.1 192.168.39.206 ha-928358 localhost minikube]
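
The server certificate above is issued from the local CA with the listed IP and DNS SANs. A self-contained sketch of that kind of issuance with Go's crypto/x509 follows; it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem from the machine store, so it only illustrates the shape of the step.

// Sketch: issue a server certificate with IP and DNS SANs, signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; in practice the CA cert and key would be loaded from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-928358", Organization: []string{"jenkins.ha-928358"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.206")},
		DNSNames:     []string{"ha-928358", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
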
	I1028 11:11:27.652184  150723 provision.go:177] copyRemoteCerts
	I1028 11:11:27.652239  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:11:27.652263  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.655247  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.655509  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.655537  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.655747  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.655942  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.656141  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.656367  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:27.747959  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:11:27.748026  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:11:27.773785  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:11:27.773875  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1028 11:11:27.798172  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:11:27.798246  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:11:27.823795  150723 provision.go:87] duration metric: took 307.251687ms to configureAuth
	I1028 11:11:27.823824  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:11:27.823999  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:27.824098  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.826733  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.827058  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.827095  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.827231  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.827430  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.827593  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.827720  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.827882  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.828064  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.828082  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:11:28.063521  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:11:28.063544  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:11:28.063563  150723 main.go:141] libmachine: (ha-928358) Calling .GetURL
	I1028 11:11:28.064889  150723 main.go:141] libmachine: (ha-928358) DBG | Using libvirt version 6000000
	I1028 11:11:28.067440  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.067909  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.067936  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.068169  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:11:28.068184  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:11:28.068190  150723 client.go:171] duration metric: took 28.511205055s to LocalClient.Create
	I1028 11:11:28.068213  150723 start.go:167] duration metric: took 28.511273119s to libmachine.API.Create "ha-928358"
	I1028 11:11:28.068224  150723 start.go:293] postStartSetup for "ha-928358" (driver="kvm2")
	I1028 11:11:28.068234  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:11:28.068250  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.068499  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:11:28.068524  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.070718  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.071018  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.071047  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.071207  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.071391  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.071596  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.071768  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.160093  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:11:28.164580  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:11:28.164611  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:11:28.164677  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:11:28.164753  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:11:28.164768  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:11:28.164860  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:11:28.174780  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:11:28.200051  150723 start.go:296] duration metric: took 131.810016ms for postStartSetup
	I1028 11:11:28.200113  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:11:28.200681  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:28.203634  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.204015  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.204039  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.204248  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:28.204459  150723 start.go:128] duration metric: took 28.665968765s to createHost
	I1028 11:11:28.204486  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.206915  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.207241  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.207270  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.207406  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.207565  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.207714  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.207841  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.207995  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:28.208148  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:28.208158  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:11:28.326642  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730113888.306870077
	
	I1028 11:11:28.326664  150723 fix.go:216] guest clock: 1730113888.306870077
	I1028 11:11:28.326674  150723 fix.go:229] Guest: 2024-10-28 11:11:28.306870077 +0000 UTC Remote: 2024-10-28 11:11:28.204471945 +0000 UTC m=+28.781211208 (delta=102.398132ms)
	I1028 11:11:28.326699  150723 fix.go:200] guest clock delta is within tolerance: 102.398132ms
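
The guest clock check compares the output of `date +%s.%N` on the guest with a host-side timestamp and accepts the result if the delta stays inside a tolerance. A sketch of that comparison, with the tolerance value assumed purely for illustration:

// Sketch: parse the guest's `date +%s.%N` output and compare it with a
// host-side reference time; the tolerance value is assumed for illustration.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed, for illustration only

	guest, err := parseGuestClock("1730113888.306870077")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1730113888, 204471945) // host-side reference from the log
	delta := guest.Sub(host)
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
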
	I1028 11:11:28.326706  150723 start.go:83] releasing machines lock for "ha-928358", held for 28.788289196s
	I1028 11:11:28.326726  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.327001  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:28.329581  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.329968  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.330003  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.330168  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330728  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330884  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330998  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:11:28.331060  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.331115  150723 ssh_runner.go:195] Run: cat /version.json
	I1028 11:11:28.331141  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.333639  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.333966  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.333994  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334015  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334246  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.334387  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.334412  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334416  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.334585  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.334627  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.334755  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.334771  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.334927  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.335084  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.419255  150723 ssh_runner.go:195] Run: systemctl --version
	I1028 11:11:28.450377  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:11:28.614960  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:11:28.621690  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:11:28.621762  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:11:28.640026  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:11:28.640058  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:11:28.640161  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:11:28.657821  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:11:28.673308  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:11:28.673372  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:11:28.688651  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:11:28.704016  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:11:28.829012  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:11:28.990202  150723 docker.go:233] disabling docker service ...
	I1028 11:11:28.990264  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:11:29.006016  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:11:29.019798  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:11:29.148701  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:11:29.286836  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:11:29.301306  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:11:29.321180  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:11:29.321242  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.332417  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:11:29.332516  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.344116  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.355229  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.366386  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:11:29.377683  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.388680  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.406712  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.418602  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:11:29.428422  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:11:29.428489  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:11:29.442860  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
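
The three commands above show a verify-then-fallback pattern: probing the net.bridge.bridge-nf-call-iptables sysctl fails while br_netfilter is not loaded, so the module is loaded before IP forwarding is enabled. A sketch of that fallback with os/exec; the helper name is made up for illustration.

// Sketch: try the bridge netfilter sysctl first and, if it is missing,
// load br_netfilter, mirroring the fallback visible in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	if out, err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").CombinedOutput(); err == nil {
		fmt.Printf("netfilter already available: %s", out)
		return nil
	}
	// The sysctl only exists once the module is loaded; this mirrors the
	// `sudo modprobe br_netfilter` step in the log.
	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
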
	I1028 11:11:29.453466  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:11:29.587618  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:11:29.702292  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:11:29.702379  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:11:29.708037  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:11:29.708101  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:11:29.712169  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:11:29.760681  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:11:29.760781  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:11:29.793958  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:11:29.827829  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:11:29.829108  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:29.831950  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:29.832308  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:29.832337  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:29.832530  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:11:29.837077  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:11:29.850764  150723 kubeadm.go:883] updating cluster {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:11:29.850982  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:11:29.851067  150723 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:11:29.884186  150723 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:11:29.884257  150723 ssh_runner.go:195] Run: which lz4
	I1028 11:11:29.888297  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:11:29.888406  150723 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:11:29.892595  150723 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:11:29.892630  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:11:31.364550  150723 crio.go:462] duration metric: took 1.47616531s to copy over tarball
	I1028 11:11:31.364646  150723 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:11:33.492729  150723 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.128048416s)
	I1028 11:11:33.492765  150723 crio.go:469] duration metric: took 2.12817379s to extract the tarball
	I1028 11:11:33.492775  150723 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:11:33.530789  150723 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:11:33.576388  150723 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:11:33.576418  150723 cache_images.go:84] Images are preloaded, skipping loading
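
Whether the preload succeeded is decided by listing images through crictl and looking for the expected tags. A sketch that parses `crictl images --output json` output, assuming the images/repoTags JSON shape that crictl emits:

// Sketch: check whether a given image tag shows up in the JSON output of
// `sudo crictl images --output json` (assumed "images"/"repoTags" shape).
package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(raw []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.31.2")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}
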
	I1028 11:11:33.576428  150723 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.2 crio true true} ...
	I1028 11:11:33.576525  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:11:33.576597  150723 ssh_runner.go:195] Run: crio config
	I1028 11:11:33.628433  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:11:33.628457  150723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:11:33.628468  150723 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:11:33.628490  150723 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-928358 NodeName:ha-928358 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:11:33.628623  150723 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-928358"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
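
A minimal sketch, not executed by this test: the block above is the kubeadm configuration that minikube later writes to /var/tmp/minikube/kubeadm.yaml (see the "kubeadm init --config" invocation further down in this log). Assuming a kubeadm release that ships the "config validate" subcommand (recent versions such as the v1.31.x used here), such a file can be checked by hand on the node:

    # sketch only; validates the generated config against kubeadm's API types
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
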
	
	I1028 11:11:33.628649  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:11:33.628693  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:11:33.645502  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:11:33.645637  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
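
A minimal sketch, not from this run: the static pod above runs kube-vip in ARP mode with leader election for the control-plane VIP 192.168.39.254, using the Lease named plndr-cp-lock. Assuming the kubectl context name matches the minikube profile (ha-928358), the current VIP holder can be inspected like this:

    # on a control-plane node: is the VIP currently bound to eth0 here?
    ip addr show eth0 | grep -w 192.168.39.254
    # the Lease's holderIdentity names the node that won leader election
    kubectl --context ha-928358 -n kube-system get lease plndr-cp-lock -o yaml
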
	I1028 11:11:33.645712  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:11:33.657169  150723 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:11:33.657234  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:11:33.668705  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:11:33.687712  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:11:33.707287  150723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:11:33.725968  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:11:33.745306  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:11:33.749954  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:11:33.764379  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:11:33.885154  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:11:33.902745  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.206
	I1028 11:11:33.902769  150723 certs.go:194] generating shared ca certs ...
	I1028 11:11:33.902784  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:33.902965  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:11:33.903024  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:11:33.903039  150723 certs.go:256] generating profile certs ...
	I1028 11:11:33.903106  150723 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:11:33.903126  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt with IP's: []
	I1028 11:11:34.090717  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt ...
	I1028 11:11:34.090747  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt: {Name:mk3976b6be27fc4f31aa39dbf48c0afa90955478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.090957  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key ...
	I1028 11:11:34.090981  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key: {Name:mk302db81268b764894e98d850b90eaaced7a15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.091101  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923
	I1028 11:11:34.091124  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.254]
	I1028 11:11:34.335900  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 ...
	I1028 11:11:34.335935  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923: {Name:mk0008343e6fdd7a08b2d031f0ba617f7a66f590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.336144  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923 ...
	I1028 11:11:34.336163  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923: {Name:mkd6c56ea43ae5fd58d0e46e3c3070e385813140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.336286  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:11:34.336450  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:11:34.336537  150723 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:11:34.336559  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt with IP's: []
	I1028 11:11:34.464000  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt ...
	I1028 11:11:34.464029  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt: {Name:mkb9ddbbbcf10a07648ff0910f8f6f99edd94a08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.464231  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key ...
	I1028 11:11:34.464247  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key: {Name:mk17d0ad23ae67dc57b4cfd6ae702fbcda30c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.464343  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:11:34.464369  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:11:34.464389  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:11:34.464407  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:11:34.464422  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:11:34.464435  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:11:34.464453  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:11:34.464472  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:11:34.464549  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:11:34.464601  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:11:34.464617  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:11:34.464647  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:11:34.464682  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:11:34.464714  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:11:34.464766  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:11:34.464809  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.464829  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.464844  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.465667  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:11:34.492761  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:11:34.519090  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:11:34.544886  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:11:34.571307  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:11:34.596836  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:11:34.622460  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:11:34.648376  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:11:34.677988  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:11:34.708308  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:11:34.732512  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:11:34.757152  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:11:34.774559  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:11:34.780665  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:11:34.792209  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.797675  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.797733  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.804182  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:11:34.816617  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:11:34.829067  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.834000  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.834062  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.840080  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:11:34.851913  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:11:34.863842  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.868862  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.868942  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.875065  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
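
Aside (sketch, not part of the log): the /etc/ssl/certs/<hash>.0 symlinks created above follow OpenSSL's hashed-directory lookup convention; the eight-hex-digit name is the subject hash that "openssl x509 -hash" prints for the certificate, e.g. b5213941 for the minikube CA in this run. Reproducing one of the links by hand would look roughly like:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
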
	I1028 11:11:34.888703  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:11:34.893205  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:11:34.893271  150723 kubeadm.go:392] StartCluster: {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:11:34.893354  150723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:11:34.893425  150723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:11:34.932903  150723 cri.go:89] found id: ""
	I1028 11:11:34.932974  150723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:11:34.944526  150723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:11:34.956312  150723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:11:34.967457  150723 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:11:34.967484  150723 kubeadm.go:157] found existing configuration files:
	
	I1028 11:11:34.967537  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:11:34.977810  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:11:34.977875  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:11:34.988232  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:11:34.998184  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:11:34.998247  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:11:35.008728  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:11:35.018729  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:11:35.018793  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:11:35.029800  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:11:35.040304  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:11:35.040357  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:11:35.050830  150723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:11:35.164435  150723 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:11:35.164499  150723 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:11:35.281374  150723 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:11:35.281556  150723 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:11:35.281686  150723 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:11:35.294386  150723 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:11:35.479371  150723 out.go:235]   - Generating certificates and keys ...
	I1028 11:11:35.479512  150723 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:11:35.479602  150723 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:11:35.531977  150723 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:11:35.706199  150723 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:11:35.805605  150723 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:11:35.955545  150723 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:11:36.024313  150723 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:11:36.024446  150723 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-928358 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1028 11:11:36.166366  150723 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:11:36.166553  150723 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-928358 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1028 11:11:36.477451  150723 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:11:36.529937  150723 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:11:36.764928  150723 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:11:36.765199  150723 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:11:36.958542  150723 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:11:37.098519  150723 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:11:37.432447  150723 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:11:37.510265  150723 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:11:37.727523  150723 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:11:37.728159  150723 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:11:37.734975  150723 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:11:37.736761  150723 out.go:235]   - Booting up control plane ...
	I1028 11:11:37.736891  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:11:37.737036  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:11:37.737392  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:11:37.761460  150723 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:11:37.769245  150723 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:11:37.769327  150723 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:11:37.901440  150723 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:11:37.901605  150723 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:11:38.403804  150723 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.460314ms
	I1028 11:11:38.403927  150723 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:11:44.555956  150723 kubeadm.go:310] [api-check] The API server is healthy after 6.1544774s
	I1028 11:11:44.584149  150723 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:11:44.607891  150723 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:11:44.647415  150723 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:11:44.647602  150723 kubeadm.go:310] [mark-control-plane] Marking the node ha-928358 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:11:44.670940  150723 kubeadm.go:310] [bootstrap-token] Using token: 7u74ui.ti422fa98pbd45zp
	I1028 11:11:44.672724  150723 out.go:235]   - Configuring RBAC rules ...
	I1028 11:11:44.672861  150723 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:11:44.681325  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:11:44.701467  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:11:44.720481  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:11:44.731591  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:11:44.743611  150723 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:11:44.968060  150723 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:11:45.411017  150723 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:11:45.970736  150723 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:11:45.970791  150723 kubeadm.go:310] 
	I1028 11:11:45.970885  150723 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:11:45.970911  150723 kubeadm.go:310] 
	I1028 11:11:45.971033  150723 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:11:45.971045  150723 kubeadm.go:310] 
	I1028 11:11:45.971081  150723 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:11:45.971155  150723 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:11:45.971234  150723 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:11:45.971246  150723 kubeadm.go:310] 
	I1028 11:11:45.971327  150723 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:11:45.971346  150723 kubeadm.go:310] 
	I1028 11:11:45.971421  150723 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:11:45.971432  150723 kubeadm.go:310] 
	I1028 11:11:45.971526  150723 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:11:45.971668  150723 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:11:45.971782  150723 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:11:45.971802  150723 kubeadm.go:310] 
	I1028 11:11:45.971912  150723 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:11:45.972050  150723 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:11:45.972078  150723 kubeadm.go:310] 
	I1028 11:11:45.972201  150723 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7u74ui.ti422fa98pbd45zp \
	I1028 11:11:45.972360  150723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 11:11:45.972397  150723 kubeadm.go:310] 	--control-plane 
	I1028 11:11:45.972407  150723 kubeadm.go:310] 
	I1028 11:11:45.972546  150723 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:11:45.972563  150723 kubeadm.go:310] 
	I1028 11:11:45.972685  150723 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7u74ui.ti422fa98pbd45zp \
	I1028 11:11:45.972831  150723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 11:11:45.973046  150723 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
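
Aside (sketch, not part of the log): the sha256 value passed to --discovery-token-ca-cert-hash in the join commands above is a hash of the cluster CA's public key, which lets a joining node pin the CA without it being pre-distributed. It can be recomputed on the control plane with plain openssl; the CA path below follows minikube's layout shown earlier in this log (stock kubeadm keeps the CA at /etc/kubernetes/pki/ca.crt):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex
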
	I1028 11:11:45.973098  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:11:45.973115  150723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:11:45.975136  150723 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:11:45.976845  150723 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:11:45.982665  150723 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:11:45.982687  150723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:11:46.004414  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
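
Aside (sketch, not from this run): the manifest applied above is the kindnet CNI that minikube recommends once multiple nodes are detected (see the "multinode detected ... recommending kindnet" lines). After it is applied, kindnet pods should come up in kube-system; a quick check, assuming the kubectl context matches the profile name:

    kubectl --context ha-928358 -n kube-system get pods -o wide | grep kindnet
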
	I1028 11:11:46.391016  150723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:11:46.391108  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:46.391153  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358 minikube.k8s.io/updated_at=2024_10_28T11_11_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=true
	I1028 11:11:46.556219  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:46.556239  150723 ops.go:34] apiserver oom_adj: -16
	I1028 11:11:47.056803  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:47.556401  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:48.057031  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:48.556648  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:49.056531  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:49.556278  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.056341  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.557096  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.688176  150723 kubeadm.go:1113] duration metric: took 4.297146148s to wait for elevateKubeSystemPrivileges
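
Aside (sketch, not from the log): the repeated "kubectl get sa default" calls above are a poll loop; minikube waits for the default ServiceAccount to exist as part of the elevateKubeSystemPrivileges step it just timed. An equivalent hand-rolled wait, with the context name assumed to match the profile:

    until kubectl --context ha-928358 -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 1
    done
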
	I1028 11:11:50.688219  150723 kubeadm.go:394] duration metric: took 15.794958001s to StartCluster
	I1028 11:11:50.688240  150723 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:50.688317  150723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:11:50.689020  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:50.689264  150723 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:11:50.689283  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:11:50.689310  150723 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:11:50.689399  150723 addons.go:69] Setting storage-provisioner=true in profile "ha-928358"
	I1028 11:11:50.689294  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:11:50.689432  150723 addons.go:69] Setting default-storageclass=true in profile "ha-928358"
	I1028 11:11:50.689434  150723 addons.go:234] Setting addon storage-provisioner=true in "ha-928358"
	I1028 11:11:50.689444  150723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-928358"
	I1028 11:11:50.689473  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:11:50.689502  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:50.689978  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.690024  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.690030  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.690078  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.705787  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I1028 11:11:50.705799  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1028 11:11:50.706396  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.706425  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.706943  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.706961  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.707116  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.707141  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.707344  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.707538  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.707605  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.708242  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.708286  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.709865  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:11:50.710123  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:11:50.710718  150723 addons.go:234] Setting addon default-storageclass=true in "ha-928358"
	I1028 11:11:50.710749  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:11:50.710982  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.711007  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.711160  150723 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:11:50.724777  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I1028 11:11:50.725295  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.725751  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33439
	I1028 11:11:50.725906  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.725930  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.726287  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.726327  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.726526  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.726809  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.726831  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.727169  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.727730  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.727777  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.728384  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:50.730334  150723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:11:50.731788  150723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:11:50.731810  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:11:50.731829  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:50.735112  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.735661  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:50.735681  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.735902  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:50.736091  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:50.736234  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:50.736386  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:50.743829  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I1028 11:11:50.744355  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.744925  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.744949  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.745276  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.745461  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.747144  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:50.747358  150723 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:11:50.747374  150723 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:11:50.747388  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:50.749934  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.750358  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:50.750397  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.750503  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:50.750676  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:50.750813  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:50.750942  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:50.872575  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:11:50.921646  150723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:11:50.984303  150723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:11:51.311574  150723 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
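
Aside (sketch, not part of the log): the sed pipeline run at 11:11:50.872575 rewrites the coredns ConfigMap so the Corefile gains a hosts block resolving host.minikube.internal to the host gateway 192.168.39.1 (plus a log directive). The result can be inspected afterwards, assuming the context name matches the profile:

    # the injected Corefile fragment looks roughly like:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl --context ha-928358 -n kube-system get configmap coredns -o yaml
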
	I1028 11:11:51.359517  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.359546  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.359929  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.359938  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.359978  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.359992  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.360011  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.360266  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.360332  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.360347  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.360405  150723 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:11:51.360435  150723 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:11:51.360539  150723 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:11:51.360552  150723 round_trippers.go:469] Request Headers:
	I1028 11:11:51.360564  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:11:51.360580  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:11:51.370574  150723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:11:51.371224  150723 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:11:51.371242  150723 round_trippers.go:469] Request Headers:
	I1028 11:11:51.371253  150723 round_trippers.go:473]     Content-Type: application/json
	I1028 11:11:51.371260  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:11:51.371264  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:11:51.378842  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:11:51.379088  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.379107  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.379391  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.379407  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.723667  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.723697  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.724015  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.724061  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.724071  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.724078  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.724024  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.724319  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.724335  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.726167  150723 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 11:11:51.727603  150723 addons.go:510] duration metric: took 1.038296123s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 11:11:51.727646  150723 start.go:246] waiting for cluster config update ...
	I1028 11:11:51.727661  150723 start.go:255] writing updated cluster config ...
	I1028 11:11:51.729506  150723 out.go:201] 
	I1028 11:11:51.731166  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:51.731233  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:51.732989  150723 out.go:177] * Starting "ha-928358-m02" control-plane node in "ha-928358" cluster
	I1028 11:11:51.734422  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:11:51.734443  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:11:51.734539  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:11:51.734550  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:11:51.734619  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:51.734790  150723 start.go:360] acquireMachinesLock for ha-928358-m02: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:11:51.734834  150723 start.go:364] duration metric: took 28.788µs to acquireMachinesLock for "ha-928358-m02"
	I1028 11:11:51.734851  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:11:51.734918  150723 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 11:11:51.736531  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:11:51.736608  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:51.736641  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:51.751347  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I1028 11:11:51.751714  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:51.752299  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:51.752328  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:51.752603  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:51.752792  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:11:51.752934  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:11:51.753123  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:11:51.753174  150723 client.go:168] LocalClient.Create starting
	I1028 11:11:51.753215  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:11:51.753263  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:11:51.753289  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:11:51.753362  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:11:51.753389  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:11:51.753404  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:11:51.753437  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:11:51.753449  150723 main.go:141] libmachine: (ha-928358-m02) Calling .PreCreateCheck
	I1028 11:11:51.753595  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:11:51.754006  150723 main.go:141] libmachine: Creating machine...
	I1028 11:11:51.754022  150723 main.go:141] libmachine: (ha-928358-m02) Calling .Create
	I1028 11:11:51.754205  150723 main.go:141] libmachine: (ha-928358-m02) Creating KVM machine...
	I1028 11:11:51.755415  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found existing default KVM network
	I1028 11:11:51.755582  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found existing private KVM network mk-ha-928358
	I1028 11:11:51.755707  150723 main.go:141] libmachine: (ha-928358-m02) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 ...
	I1028 11:11:51.755730  150723 main.go:141] libmachine: (ha-928358-m02) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:11:51.755821  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:51.755707  151103 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:51.755971  150723 main.go:141] libmachine: (ha-928358-m02) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:11:51.993174  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:51.993039  151103 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa...
	I1028 11:11:52.383008  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:52.382864  151103 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/ha-928358-m02.rawdisk...
	I1028 11:11:52.383053  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Writing magic tar header
	I1028 11:11:52.383094  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Writing SSH key tar header
	I1028 11:11:52.383117  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:52.383029  151103 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 ...
	I1028 11:11:52.383167  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02
	I1028 11:11:52.383203  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:11:52.383214  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 (perms=drwx------)
	I1028 11:11:52.383224  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:52.383237  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:11:52.383258  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:11:52.383272  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:11:52.383295  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:11:52.383304  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:11:52.383313  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:11:52.383324  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:11:52.383332  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home
	I1028 11:11:52.383343  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Skipping /home - not owner
	I1028 11:11:52.383370  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:11:52.383390  150723 main.go:141] libmachine: (ha-928358-m02) Creating domain...
	I1028 11:11:52.384348  150723 main.go:141] libmachine: (ha-928358-m02) define libvirt domain using xml: 
	I1028 11:11:52.384373  150723 main.go:141] libmachine: (ha-928358-m02) <domain type='kvm'>
	I1028 11:11:52.384400  150723 main.go:141] libmachine: (ha-928358-m02)   <name>ha-928358-m02</name>
	I1028 11:11:52.384412  150723 main.go:141] libmachine: (ha-928358-m02)   <memory unit='MiB'>2200</memory>
	I1028 11:11:52.384426  150723 main.go:141] libmachine: (ha-928358-m02)   <vcpu>2</vcpu>
	I1028 11:11:52.384436  150723 main.go:141] libmachine: (ha-928358-m02)   <features>
	I1028 11:11:52.384457  150723 main.go:141] libmachine: (ha-928358-m02)     <acpi/>
	I1028 11:11:52.384472  150723 main.go:141] libmachine: (ha-928358-m02)     <apic/>
	I1028 11:11:52.384478  150723 main.go:141] libmachine: (ha-928358-m02)     <pae/>
	I1028 11:11:52.384482  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384490  150723 main.go:141] libmachine: (ha-928358-m02)   </features>
	I1028 11:11:52.384494  150723 main.go:141] libmachine: (ha-928358-m02)   <cpu mode='host-passthrough'>
	I1028 11:11:52.384501  150723 main.go:141] libmachine: (ha-928358-m02)   
	I1028 11:11:52.384506  150723 main.go:141] libmachine: (ha-928358-m02)   </cpu>
	I1028 11:11:52.384511  150723 main.go:141] libmachine: (ha-928358-m02)   <os>
	I1028 11:11:52.384516  150723 main.go:141] libmachine: (ha-928358-m02)     <type>hvm</type>
	I1028 11:11:52.384522  150723 main.go:141] libmachine: (ha-928358-m02)     <boot dev='cdrom'/>
	I1028 11:11:52.384526  150723 main.go:141] libmachine: (ha-928358-m02)     <boot dev='hd'/>
	I1028 11:11:52.384531  150723 main.go:141] libmachine: (ha-928358-m02)     <bootmenu enable='no'/>
	I1028 11:11:52.384537  150723 main.go:141] libmachine: (ha-928358-m02)   </os>
	I1028 11:11:52.384561  150723 main.go:141] libmachine: (ha-928358-m02)   <devices>
	I1028 11:11:52.384580  150723 main.go:141] libmachine: (ha-928358-m02)     <disk type='file' device='cdrom'>
	I1028 11:11:52.384598  150723 main.go:141] libmachine: (ha-928358-m02)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/boot2docker.iso'/>
	I1028 11:11:52.384615  150723 main.go:141] libmachine: (ha-928358-m02)       <target dev='hdc' bus='scsi'/>
	I1028 11:11:52.384624  150723 main.go:141] libmachine: (ha-928358-m02)       <readonly/>
	I1028 11:11:52.384628  150723 main.go:141] libmachine: (ha-928358-m02)     </disk>
	I1028 11:11:52.384634  150723 main.go:141] libmachine: (ha-928358-m02)     <disk type='file' device='disk'>
	I1028 11:11:52.384642  150723 main.go:141] libmachine: (ha-928358-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:11:52.384650  150723 main.go:141] libmachine: (ha-928358-m02)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/ha-928358-m02.rawdisk'/>
	I1028 11:11:52.384657  150723 main.go:141] libmachine: (ha-928358-m02)       <target dev='hda' bus='virtio'/>
	I1028 11:11:52.384661  150723 main.go:141] libmachine: (ha-928358-m02)     </disk>
	I1028 11:11:52.384668  150723 main.go:141] libmachine: (ha-928358-m02)     <interface type='network'>
	I1028 11:11:52.384674  150723 main.go:141] libmachine: (ha-928358-m02)       <source network='mk-ha-928358'/>
	I1028 11:11:52.384681  150723 main.go:141] libmachine: (ha-928358-m02)       <model type='virtio'/>
	I1028 11:11:52.384688  150723 main.go:141] libmachine: (ha-928358-m02)     </interface>
	I1028 11:11:52.384692  150723 main.go:141] libmachine: (ha-928358-m02)     <interface type='network'>
	I1028 11:11:52.384698  150723 main.go:141] libmachine: (ha-928358-m02)       <source network='default'/>
	I1028 11:11:52.384703  150723 main.go:141] libmachine: (ha-928358-m02)       <model type='virtio'/>
	I1028 11:11:52.384708  150723 main.go:141] libmachine: (ha-928358-m02)     </interface>
	I1028 11:11:52.384713  150723 main.go:141] libmachine: (ha-928358-m02)     <serial type='pty'>
	I1028 11:11:52.384742  150723 main.go:141] libmachine: (ha-928358-m02)       <target port='0'/>
	I1028 11:11:52.384769  150723 main.go:141] libmachine: (ha-928358-m02)     </serial>
	I1028 11:11:52.384791  150723 main.go:141] libmachine: (ha-928358-m02)     <console type='pty'>
	I1028 11:11:52.384814  150723 main.go:141] libmachine: (ha-928358-m02)       <target type='serial' port='0'/>
	I1028 11:11:52.384828  150723 main.go:141] libmachine: (ha-928358-m02)     </console>
	I1028 11:11:52.384840  150723 main.go:141] libmachine: (ha-928358-m02)     <rng model='virtio'>
	I1028 11:11:52.384852  150723 main.go:141] libmachine: (ha-928358-m02)       <backend model='random'>/dev/random</backend>
	I1028 11:11:52.384859  150723 main.go:141] libmachine: (ha-928358-m02)     </rng>
	I1028 11:11:52.384865  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384887  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384900  150723 main.go:141] libmachine: (ha-928358-m02)   </devices>
	I1028 11:11:52.384910  150723 main.go:141] libmachine: (ha-928358-m02) </domain>
	I1028 11:11:52.384921  150723 main.go:141] libmachine: (ha-928358-m02) 
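
[Editor's note] The lines above are the libvirt domain XML the kvm2 driver defines for ha-928358-m02 (2 vCPUs, 2200 MiB, the rawdisk plus the boot ISO, and two virtio NICs on mk-ha-928358 and default). As a reference, a minimal stand-alone sketch of defining and booting such a domain with the Go libvirt bindings could look like the following; it assumes the libvirt.org/go/libvirt package and a domainXML string like the one logged, and it is not the driver's actual code.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	// Connect to the system libvirt daemon (the KVMQemuURI from the config above).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Define the persistent domain from XML like the block logged above, then boot it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create()
}

func main() {
	// Placeholder XML: a real call needs a complete <domain> document like the one in the log.
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}
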
	I1028 11:11:52.391941  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:67:49 in network default
	I1028 11:11:52.392560  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring networks are active...
	I1028 11:11:52.392579  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:52.393436  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring network default is active
	I1028 11:11:52.393821  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring network mk-ha-928358 is active
	I1028 11:11:52.394171  150723 main.go:141] libmachine: (ha-928358-m02) Getting domain xml...
	I1028 11:11:52.394853  150723 main.go:141] libmachine: (ha-928358-m02) Creating domain...
	I1028 11:11:53.630024  150723 main.go:141] libmachine: (ha-928358-m02) Waiting to get IP...
	I1028 11:11:53.630962  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:53.631449  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:53.631495  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:53.631430  151103 retry.go:31] will retry after 231.171985ms: waiting for machine to come up
	I1028 11:11:53.864111  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:53.864512  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:53.864546  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:53.864499  151103 retry.go:31] will retry after 296.507043ms: waiting for machine to come up
	I1028 11:11:54.163050  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:54.163543  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:54.163593  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:54.163496  151103 retry.go:31] will retry after 357.855811ms: waiting for machine to come up
	I1028 11:11:54.523089  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:54.523546  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:54.523575  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:54.523481  151103 retry.go:31] will retry after 569.003787ms: waiting for machine to come up
	I1028 11:11:55.094333  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:55.094770  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:55.094795  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:55.094741  151103 retry.go:31] will retry after 495.310626ms: waiting for machine to come up
	I1028 11:11:55.591480  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:55.592037  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:55.592065  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:55.591984  151103 retry.go:31] will retry after 697.027358ms: waiting for machine to come up
	I1028 11:11:56.291011  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:56.291427  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:56.291455  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:56.291390  151103 retry.go:31] will retry after 819.98241ms: waiting for machine to come up
	I1028 11:11:57.112476  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:57.112920  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:57.112950  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:57.112861  151103 retry.go:31] will retry after 1.468451423s: waiting for machine to come up
	I1028 11:11:58.582633  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:58.583095  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:58.583117  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:58.583044  151103 retry.go:31] will retry after 1.732332827s: waiting for machine to come up
	I1028 11:12:00.316579  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:00.316974  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:00.317005  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:00.316915  151103 retry.go:31] will retry after 1.701246598s: waiting for machine to come up
	I1028 11:12:02.020279  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:02.020762  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:02.020780  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:02.020732  151103 retry.go:31] will retry after 2.239954262s: waiting for machine to come up
	I1028 11:12:04.262705  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:04.263103  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:04.263134  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:04.263076  151103 retry.go:31] will retry after 3.584543805s: waiting for machine to come up
	I1028 11:12:07.848824  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:07.849223  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:07.849246  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:07.849186  151103 retry.go:31] will retry after 4.083747812s: waiting for machine to come up
	I1028 11:12:11.934986  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:11.935519  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:11.935541  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:11.935464  151103 retry.go:31] will retry after 5.450262186s: waiting for machine to come up
	I1028 11:12:17.387598  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.388014  150723 main.go:141] libmachine: (ha-928358-m02) Found IP for machine: 192.168.39.15
	I1028 11:12:17.388040  150723 main.go:141] libmachine: (ha-928358-m02) Reserving static IP address...
	I1028 11:12:17.388061  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has current primary IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.388484  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find host DHCP lease matching {name: "ha-928358-m02", mac: "52:54:00:6f:70:28", ip: "192.168.39.15"} in network mk-ha-928358
	I1028 11:12:17.468628  150723 main.go:141] libmachine: (ha-928358-m02) Reserved static IP address: 192.168.39.15
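
[Editor's note] The repeated "will retry after …: waiting for machine to come up" lines are a polling loop on the new domain's DHCP lease, with delays growing from roughly 230ms to 5.5s before the IP 192.168.39.15 appears. A simplified sketch of that kind of wait loop is below; lookupIP is a hypothetical stand-in for the driver's lease lookup, and the backoff constants are illustrative rather than minikube's.

package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with growing, jittered delays until the machine
// reports an address or the timeout passes.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter, then roughly double the delay, similar to the logged
		// retry intervals (231ms, 296ms, ... 5.45s).
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

func main() {
	// lookupIP stands in for the driver's DHCP-lease query against libvirt.
	attempts := 0
	lookupIP := func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.15", nil
	}
	ip, err := waitForIP(lookupIP, time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("got IP:", ip)
}

The jittered, capped doubling keeps the early probes cheap while avoiding a tight loop once the VM takes longer to obtain a lease.
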
	I1028 11:12:17.468659  150723 main.go:141] libmachine: (ha-928358-m02) Waiting for SSH to be available...
	I1028 11:12:17.468668  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Getting to WaitForSSH function...
	I1028 11:12:17.471501  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.472007  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.472034  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.472218  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using SSH client type: external
	I1028 11:12:17.472251  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa (-rw-------)
	I1028 11:12:17.472281  150723 main.go:141] libmachine: (ha-928358-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:12:17.472296  150723 main.go:141] libmachine: (ha-928358-m02) DBG | About to run SSH command:
	I1028 11:12:17.472313  150723 main.go:141] libmachine: (ha-928358-m02) DBG | exit 0
	I1028 11:12:17.602076  150723 main.go:141] libmachine: (ha-928358-m02) DBG | SSH cmd err, output: <nil>: 
	I1028 11:12:17.602372  150723 main.go:141] libmachine: (ha-928358-m02) KVM machine creation complete!
	I1028 11:12:17.602744  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:12:17.603321  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:17.603533  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:17.603697  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:12:17.603728  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetState
	I1028 11:12:17.605258  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:12:17.605275  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:12:17.605282  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:12:17.605291  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.607333  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.607701  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.607721  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.607912  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.608143  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.608313  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.608439  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.608583  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.608808  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.608820  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:12:17.721307  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
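
[Editor's note] The "About to run SSH command: exit 0" step is just a liveness probe over the freshly created key. An illustrative Go version of that probe using golang.org/x/crypto/ssh (not the libmachine implementation) could be:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH runs "exit 0" on addr with the given private key, mirroring the
// WaitForSSH step in the log above.
func probeSSH(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, matches StrictHostKeyChecking=no
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	// Address, user and key path here are the ones seen in the log; adjust as needed.
	if err := probeSSH("192.168.39.15:22", "docker", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
}
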
	I1028 11:12:17.721336  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:12:17.721347  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.724798  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.725194  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.725223  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.725409  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.725636  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.725807  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.725966  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.726099  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.726262  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.726279  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:12:17.838473  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:12:17.838586  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:12:17.838602  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:12:17.838613  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:17.838892  150723 buildroot.go:166] provisioning hostname "ha-928358-m02"
	I1028 11:12:17.838917  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:17.839093  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.841883  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.842317  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.842339  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.842472  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.842669  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.842831  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.842971  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.843156  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.843326  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.843338  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358-m02 && echo "ha-928358-m02" | sudo tee /etc/hostname
	I1028 11:12:17.968498  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358-m02
	
	I1028 11:12:17.968528  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.971246  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.971623  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.971653  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.971818  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.971988  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.972158  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.972315  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.972474  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.972671  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.972693  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:12:18.095026  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:12:18.095079  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:12:18.095099  150723 buildroot.go:174] setting up certificates
	I1028 11:12:18.095111  150723 provision.go:84] configureAuth start
	I1028 11:12:18.095125  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:18.095406  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.098183  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.098549  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.098574  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.098726  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.100797  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.101183  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.101209  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.101422  150723 provision.go:143] copyHostCerts
	I1028 11:12:18.101450  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:12:18.101483  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:12:18.101493  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:12:18.101585  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:12:18.101707  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:12:18.101736  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:12:18.101747  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:12:18.101792  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:12:18.101860  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:12:18.101880  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:12:18.101884  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:12:18.101906  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:12:18.101972  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358-m02 san=[127.0.0.1 192.168.39.15 ha-928358-m02 localhost minikube]
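
[Editor's note] configureAuth issues a per-machine server certificate signed by the minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.39.15, ha-928358-m02, localhost, minikube). A rough, self-contained sketch with crypto/x509 is below; the serial number, key size and the throwaway CA in main are placeholders, not what minikube actually uses.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate for the new machine with the
// cluster CA, including the IP and DNS SANs seen in the provision log.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // placeholder serial
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-928358-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.15")},
		DNSNames:     []string{"ha-928358-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway self-signed CA for the demo; error handling elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := issueServerCert(caCert, caKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}
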
	I1028 11:12:18.196094  150723 provision.go:177] copyRemoteCerts
	I1028 11:12:18.196152  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:12:18.196173  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.198995  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.199315  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.199339  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.199521  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.199709  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.199854  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.199983  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.288841  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:12:18.288936  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:12:18.314840  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:12:18.314910  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:12:18.341393  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:12:18.341485  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:12:18.366854  150723 provision.go:87] duration metric: took 271.722974ms to configureAuth
	I1028 11:12:18.366893  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:12:18.367124  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:18.367212  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.370267  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.370606  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.370639  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.370796  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.371029  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.371173  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.371307  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.371456  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:18.371620  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:18.371634  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:12:18.612895  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:12:18.612923  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:12:18.612931  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetURL
	I1028 11:12:18.614354  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using libvirt version 6000000
	I1028 11:12:18.616667  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.617056  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.617087  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.617192  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:12:18.617204  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:12:18.617212  150723 client.go:171] duration metric: took 26.86402649s to LocalClient.Create
	I1028 11:12:18.617234  150723 start.go:167] duration metric: took 26.864111247s to libmachine.API.Create "ha-928358"
	I1028 11:12:18.617248  150723 start.go:293] postStartSetup for "ha-928358-m02" (driver="kvm2")
	I1028 11:12:18.617264  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:12:18.617289  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.617583  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:12:18.617614  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.619991  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.620293  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.620324  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.620465  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.620632  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.620807  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.620947  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.709453  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:12:18.714006  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:12:18.714050  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:12:18.714135  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:12:18.714212  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:12:18.714223  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:12:18.714317  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:12:18.725069  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:12:18.750381  150723 start.go:296] duration metric: took 133.112799ms for postStartSetup
	I1028 11:12:18.750443  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:12:18.751083  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.753465  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.753830  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.753860  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.754104  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:12:18.754302  150723 start.go:128] duration metric: took 27.019366662s to createHost
	I1028 11:12:18.754324  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.756274  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.756584  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.756606  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.756746  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.756928  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.757083  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.757211  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.757395  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:18.757617  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:18.757632  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:12:18.870465  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730113938.848702185
	
	I1028 11:12:18.870492  150723 fix.go:216] guest clock: 1730113938.848702185
	I1028 11:12:18.870502  150723 fix.go:229] Guest: 2024-10-28 11:12:18.848702185 +0000 UTC Remote: 2024-10-28 11:12:18.754313813 +0000 UTC m=+79.331053022 (delta=94.388372ms)
	I1028 11:12:18.870523  150723 fix.go:200] guest clock delta is within tolerance: 94.388372ms
	I1028 11:12:18.870530  150723 start.go:83] releasing machines lock for "ha-928358-m02", held for 27.135687063s
	I1028 11:12:18.870557  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.870818  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.873499  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.873921  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.873952  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.876354  150723 out.go:177] * Found network options:
	I1028 11:12:18.877803  150723 out.go:177]   - NO_PROXY=192.168.39.206
	W1028 11:12:18.879297  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:12:18.879332  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.879863  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.880042  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.880145  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:12:18.880199  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	W1028 11:12:18.880223  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:12:18.880307  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:12:18.880332  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.882741  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883009  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.883032  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883152  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883178  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.883365  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.883531  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.883570  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.883597  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883673  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.883773  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.883886  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.883979  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.884097  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:19.140607  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:12:19.146803  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:12:19.146880  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:12:19.163725  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:12:19.163760  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:12:19.163823  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:12:19.180717  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:12:19.195299  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:12:19.195367  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:12:19.209555  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:12:19.223597  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:12:19.345039  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:12:19.505186  150723 docker.go:233] disabling docker service ...
	I1028 11:12:19.505264  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:12:19.520570  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:12:19.534795  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:12:19.656005  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:12:19.777835  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:12:19.793076  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:12:19.813202  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:12:19.813275  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.824795  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:12:19.824878  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.836376  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.847788  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.858444  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:12:19.869710  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.880881  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.900116  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.910944  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:12:19.921199  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:12:19.921284  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:12:19.936681  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:12:19.954317  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:20.080754  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:12:20.180414  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:12:20.180503  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:12:20.185906  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:12:20.185979  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:12:20.190133  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:12:20.233553  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:12:20.233626  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:12:20.262764  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:12:20.298972  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:12:20.300478  150723 out.go:177]   - env NO_PROXY=192.168.39.206
	I1028 11:12:20.301810  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:20.304361  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:20.304709  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:20.304731  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:20.304901  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:12:20.309556  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:12:20.323672  150723 mustload.go:65] Loading cluster: ha-928358
	I1028 11:12:20.323882  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:20.324235  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:20.324287  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:20.339013  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I1028 11:12:20.339463  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:20.340030  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:20.340052  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:20.340399  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:20.340615  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:12:20.342314  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:12:20.342631  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:20.342680  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:20.357539  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I1028 11:12:20.358002  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:20.358498  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:20.358519  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:20.359008  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:20.359212  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:12:20.359422  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.15
	I1028 11:12:20.359434  150723 certs.go:194] generating shared ca certs ...
	I1028 11:12:20.359450  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.359573  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:12:20.359614  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:12:20.359623  150723 certs.go:256] generating profile certs ...
	I1028 11:12:20.359689  150723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:12:20.359712  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94
	I1028 11:12:20.359727  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.254]
	I1028 11:12:20.442903  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 ...
	I1028 11:12:20.442934  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94: {Name:mk85a4e1a50b9026ab3d6dc4495b321bb7e02ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.443115  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94 ...
	I1028 11:12:20.443128  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94: {Name:mk7f773e25633de1a7b22c2c20b13ade22c5f211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.443202  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:12:20.443334  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:12:20.443463  150723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:12:20.443480  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:12:20.443493  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:12:20.443506  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:12:20.443519  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:12:20.443535  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:12:20.443547  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:12:20.443559  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:12:20.443571  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:12:20.443620  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:12:20.443647  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:12:20.443657  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:12:20.443683  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:12:20.443705  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:12:20.443728  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:12:20.443767  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:12:20.443793  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:20.443806  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:12:20.443820  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:12:20.443852  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:12:20.446971  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:20.447376  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:12:20.447407  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:20.447537  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:12:20.447754  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:12:20.447909  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:12:20.448040  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:12:20.533935  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:12:20.540194  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:12:20.553555  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:12:20.558471  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:12:20.571472  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:12:20.576267  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:12:20.588003  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:12:20.593338  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:12:20.605038  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:12:20.609724  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:12:20.623742  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:12:20.628679  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:12:20.640341  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:12:20.667017  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:12:20.692744  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:12:20.718588  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:12:20.748034  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:12:20.775373  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:12:20.802947  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:12:20.831097  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:12:20.858123  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:12:20.882703  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:12:20.907628  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:12:20.933325  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:12:20.951380  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:12:20.970398  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:12:20.988118  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:12:21.006403  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:12:21.027746  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:12:21.046174  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:12:21.066465  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:12:21.072838  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:12:21.086541  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.091618  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.091672  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.098303  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:12:21.110328  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:12:21.122629  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.127701  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.127772  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.134271  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:12:21.146879  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:12:21.159782  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.165113  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.165173  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.171693  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:12:21.183939  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:12:21.188218  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:12:21.188285  150723 kubeadm.go:934] updating node {m02 192.168.39.15 8443 v1.31.2 crio true true} ...
	I1028 11:12:21.188380  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:12:21.188402  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:12:21.188440  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:12:21.207772  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:12:21.207836  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:12:21.207903  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:12:21.219161  150723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:12:21.219233  150723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:12:21.229788  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:12:21.229822  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:12:21.229868  150723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 11:12:21.229883  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:12:21.229901  150723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 11:12:21.234643  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:12:21.234682  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:12:22.169217  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:12:22.169290  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:12:22.175155  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:12:22.175187  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:12:22.612156  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:12:22.630404  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:12:22.630517  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:12:22.635637  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:12:22.635690  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:12:22.984793  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:12:22.995829  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:12:23.014631  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:12:23.033132  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:12:23.051694  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:12:23.056057  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:12:23.069704  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:23.193632  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:12:23.213616  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:12:23.214094  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:23.214154  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:23.229467  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39255
	I1028 11:12:23.229946  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:23.230470  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:23.230493  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:23.230811  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:23.231005  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:12:23.231156  150723 start.go:317] joinCluster: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:12:23.231250  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:12:23.231265  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:12:23.234605  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:23.235105  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:12:23.235130  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:23.235484  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:12:23.235658  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:12:23.235817  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:12:23.235978  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:12:23.587402  150723 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:12:23.587450  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0up603.shgmvlsrpj1mebjg --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m02 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443"
	I1028 11:12:49.062311  150723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0up603.shgmvlsrpj1mebjg --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m02 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443": (25.474831461s)
	I1028 11:12:49.062358  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:12:49.750628  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358-m02 minikube.k8s.io/updated_at=2024_10_28T11_12_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=false
	I1028 11:12:49.901989  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-928358-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:12:50.021163  150723 start.go:319] duration metric: took 26.789999674s to joinCluster
	I1028 11:12:50.021261  150723 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:12:50.021588  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:50.022686  150723 out.go:177] * Verifying Kubernetes components...
	I1028 11:12:50.024027  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:50.259666  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:12:50.294975  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:12:50.295261  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:12:50.295325  150723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.206:8443
	I1028 11:12:50.295539  150723 node_ready.go:35] waiting up to 6m0s for node "ha-928358-m02" to be "Ready" ...
	I1028 11:12:50.295634  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:50.295644  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:50.295655  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:50.295661  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:50.311123  150723 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1028 11:12:50.796718  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:50.796750  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:50.796761  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:50.796767  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:50.800704  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:51.296741  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:51.296771  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:51.296783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:51.296789  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:51.301317  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:51.796429  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:51.796461  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:51.796472  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:51.796479  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:51.902786  150723 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I1028 11:12:52.295866  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:52.295889  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:52.295896  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:52.295902  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:52.299707  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:52.300296  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:52.796802  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:52.796836  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:52.796848  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:52.796854  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:52.801105  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:53.296430  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:53.296464  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:53.296476  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:53.296482  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:53.300401  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:53.796454  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:53.796475  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:53.796483  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:53.796487  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:53.800686  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:54.296632  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:54.296658  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:54.296669  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:54.296675  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:54.430413  150723 round_trippers.go:574] Response Status: 200 OK in 133 milliseconds
	I1028 11:12:54.431260  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:54.796228  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:54.796251  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:54.796260  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:54.796297  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:54.799743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:55.295741  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:55.295769  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:55.295779  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:55.295784  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:55.300264  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:55.796141  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:55.796166  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:55.796177  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:55.796183  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:55.799984  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:56.296002  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:56.296025  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:56.296033  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:56.296038  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:56.299236  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:56.796285  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:56.796327  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:56.796338  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:56.796343  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:56.801079  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:56.801722  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:57.295973  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:57.296010  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:57.296019  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:57.296022  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:57.300070  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:57.796110  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:57.796138  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:57.796150  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:57.796156  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:57.800286  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:58.296657  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:58.296684  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:58.296694  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:58.296700  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:58.300601  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:58.795760  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:58.795783  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:58.795791  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:58.795795  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:58.799253  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:59.296427  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:59.296448  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:59.296457  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:59.296461  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:59.300112  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:59.300577  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:59.795852  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:59.795874  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:59.795882  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:59.795886  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:59.799187  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:00.296355  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:00.296376  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:00.296385  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:00.296388  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:00.300090  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:00.796212  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:00.796241  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:00.796250  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:00.796255  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:00.799643  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:01.296675  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:01.296698  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:01.296706  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:01.296720  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:01.300506  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:01.300981  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:01.795747  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:01.795781  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:01.795793  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:01.795800  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:01.799384  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:02.296561  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:02.296587  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:02.296595  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:02.296601  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:02.300227  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:02.796111  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:02.796139  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:02.796150  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:02.796175  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:02.799502  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:03.295908  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:03.295932  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:03.295940  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:03.295944  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:03.299608  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:03.796579  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:03.796602  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:03.796611  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:03.796615  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:03.801307  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:03.802803  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:04.296022  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:04.296047  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:04.296055  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:04.296058  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:04.300556  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:04.796471  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:04.796494  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:04.796502  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:04.796507  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:04.801460  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:05.296387  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:05.296409  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:05.296417  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:05.296422  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:05.299743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:05.796148  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:05.796171  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:05.796179  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:05.796184  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:05.801488  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:06.296441  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:06.296475  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:06.296487  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:06.296492  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:06.300636  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:06.301140  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:06.796015  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:06.796054  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:06.796067  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:06.796073  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:06.802178  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:07.295805  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:07.295832  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:07.295841  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:07.295845  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:07.300831  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:07.796368  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:07.796395  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:07.796407  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:07.796413  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:07.800287  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.295819  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:08.295846  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.295856  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.295862  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.303573  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:13:08.304813  150723 node_ready.go:49] node "ha-928358-m02" has status "Ready":"True"
	I1028 11:13:08.304842  150723 node_ready.go:38] duration metric: took 18.009284836s for node "ha-928358-m02" to be "Ready" ...
	I1028 11:13:08.304855  150723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:13:08.304964  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:08.304977  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.304986  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.304996  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.314253  150723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:13:08.322556  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.322661  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gnm9r
	I1028 11:13:08.322674  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.322686  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.322694  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.325598  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.326235  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.326251  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.326262  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.326267  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.329653  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.330306  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.330330  150723 pod_ready.go:82] duration metric: took 7.745243ms for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.330344  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.330420  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xxxgw
	I1028 11:13:08.330431  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.330443  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.330451  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.333854  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.334683  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.334698  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.334709  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.334717  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.338575  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.339125  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.339151  150723 pod_ready.go:82] duration metric: took 8.79493ms for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.339166  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.339239  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358
	I1028 11:13:08.339251  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.339260  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.339266  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.342147  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.342887  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.342903  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.342914  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.342919  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.345586  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.346017  150723 pod_ready.go:93] pod "etcd-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.346037  150723 pod_ready.go:82] duration metric: took 6.859007ms for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.346049  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.346126  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m02
	I1028 11:13:08.346136  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.346149  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.346155  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.349837  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.350760  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:08.350776  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.350783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.350787  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.354111  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.354776  150723 pod_ready.go:93] pod "etcd-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.354797  150723 pod_ready.go:82] duration metric: took 8.74104ms for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.354818  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.496252  150723 request.go:632] Waited for 141.345028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:13:08.496314  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:13:08.496320  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.496333  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.496338  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.500168  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.696151  150723 request.go:632] Waited for 195.353851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.696219  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.696228  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.696240  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.696249  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.700151  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.701139  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.701160  150723 pod_ready.go:82] duration metric: took 346.331354ms for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.701174  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.896292  150723 request.go:632] Waited for 195.012978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:13:08.896361  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:13:08.896371  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.896387  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.896396  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.900050  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.096401  150723 request.go:632] Waited for 195.396634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.096476  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.096481  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.096489  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.096493  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.100986  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:09.101422  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.101442  150723 pod_ready.go:82] duration metric: took 400.258829ms for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.101456  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.296560  150723 request.go:632] Waited for 195.02851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:13:09.296638  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:13:09.296643  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.296654  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.296672  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.300596  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.496746  150723 request.go:632] Waited for 195.271102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:09.496832  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:09.496844  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.496856  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.496863  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.500375  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.501182  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.501208  150723 pod_ready.go:82] duration metric: took 399.742852ms for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.501223  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.696672  150723 request.go:632] Waited for 195.364831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:13:09.696747  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:13:09.696753  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.696761  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.696765  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.700353  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.896500  150723 request.go:632] Waited for 195.402622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.896557  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.896562  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.896570  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.896574  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.899876  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.900586  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.900606  150723 pod_ready.go:82] duration metric: took 399.370555ms for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.900621  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.096828  150723 request.go:632] Waited for 196.099526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:13:10.096889  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:13:10.096895  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.096902  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.096907  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.100607  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.295935  150723 request.go:632] Waited for 194.296247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:10.296028  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:10.296036  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.296047  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.296052  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.299514  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.299992  150723 pod_ready.go:93] pod "kube-proxy-8fxdn" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:10.300013  150723 pod_ready.go:82] duration metric: took 399.384578ms for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.300033  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.496260  150723 request.go:632] Waited for 196.135494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:13:10.496330  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:13:10.496339  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.496347  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.496352  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.500702  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:10.696747  150723 request.go:632] Waited for 195.398969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:10.696828  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:10.696834  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.696842  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.696849  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.700510  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.701486  150723 pod_ready.go:93] pod "kube-proxy-cfhp5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:10.701505  150723 pod_ready.go:82] duration metric: took 401.465094ms for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.701515  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.896720  150723 request.go:632] Waited for 195.109133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:13:10.896777  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:13:10.896783  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.896790  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.896795  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.900315  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.096400  150723 request.go:632] Waited for 195.36981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:11.096478  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:11.096483  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.096493  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.096499  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.100065  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.100566  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:11.100590  150723 pod_ready.go:82] duration metric: took 399.065558ms for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.100600  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.296785  150723 request.go:632] Waited for 196.108788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:13:11.296873  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:13:11.296881  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.296891  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.296896  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.300760  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.495907  150723 request.go:632] Waited for 194.292764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:11.495994  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:11.496001  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.496011  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.496021  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.500420  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:11.500960  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:11.500979  150723 pod_ready.go:82] duration metric: took 400.371324ms for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.500991  150723 pod_ready.go:39] duration metric: took 3.196117998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:13:11.501012  150723 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:13:11.501071  150723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:13:11.518775  150723 api_server.go:72] duration metric: took 21.497464525s to wait for apiserver process to appear ...
	I1028 11:13:11.518811  150723 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:13:11.518839  150723 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1028 11:13:11.523103  150723 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1028 11:13:11.523168  150723 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1028 11:13:11.523173  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.523180  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.523189  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.524064  150723 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:13:11.524163  150723 api_server.go:141] control plane version: v1.31.2
	I1028 11:13:11.524189  150723 api_server.go:131] duration metric: took 5.370992ms to wait for apiserver health ...
	I1028 11:13:11.524197  150723 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:13:11.696656  150723 request.go:632] Waited for 172.384226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:11.696727  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:11.696733  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.696740  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.696744  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.702489  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:11.707749  150723 system_pods.go:59] 17 kube-system pods found
	I1028 11:13:11.707791  150723 system_pods.go:61] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:13:11.707798  150723 system_pods.go:61] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:13:11.707802  150723 system_pods.go:61] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:13:11.707805  150723 system_pods.go:61] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:13:11.707808  150723 system_pods.go:61] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:13:11.707812  150723 system_pods.go:61] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:13:11.707815  150723 system_pods.go:61] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:13:11.707818  150723 system_pods.go:61] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:13:11.707821  150723 system_pods.go:61] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:13:11.707824  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:13:11.707828  150723 system_pods.go:61] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:13:11.707831  150723 system_pods.go:61] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:13:11.707833  150723 system_pods.go:61] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:13:11.707837  150723 system_pods.go:61] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:13:11.707840  150723 system_pods.go:61] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:13:11.707843  150723 system_pods.go:61] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:13:11.707847  150723 system_pods.go:61] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:13:11.707852  150723 system_pods.go:74] duration metric: took 183.650264ms to wait for pod list to return data ...
	I1028 11:13:11.707863  150723 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:13:11.895935  150723 request.go:632] Waited for 187.997842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:13:11.895992  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:13:11.895997  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.896004  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.896009  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.900031  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:11.900269  150723 default_sa.go:45] found service account: "default"
	I1028 11:13:11.900286  150723 default_sa.go:55] duration metric: took 192.416558ms for default service account to be created ...
	I1028 11:13:11.900298  150723 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:13:12.096570  150723 request.go:632] Waited for 196.184771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:12.096668  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:12.096678  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:12.096690  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:12.096703  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:12.102990  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:13:12.107971  150723 system_pods.go:86] 17 kube-system pods found
	I1028 11:13:12.108008  150723 system_pods.go:89] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:13:12.108017  150723 system_pods.go:89] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:13:12.108022  150723 system_pods.go:89] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:13:12.108027  150723 system_pods.go:89] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:13:12.108032  150723 system_pods.go:89] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:13:12.108037  150723 system_pods.go:89] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:13:12.108044  150723 system_pods.go:89] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:13:12.108051  150723 system_pods.go:89] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:13:12.108056  150723 system_pods.go:89] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:13:12.108062  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:13:12.108067  150723 system_pods.go:89] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:13:12.108072  150723 system_pods.go:89] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:13:12.108076  150723 system_pods.go:89] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:13:12.108082  150723 system_pods.go:89] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:13:12.108088  150723 system_pods.go:89] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:13:12.108094  150723 system_pods.go:89] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:13:12.108101  150723 system_pods.go:89] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:13:12.108116  150723 system_pods.go:126] duration metric: took 207.810112ms to wait for k8s-apps to be running ...
	I1028 11:13:12.108138  150723 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:13:12.108196  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:13:12.125765  150723 system_svc.go:56] duration metric: took 17.59726ms WaitForService to wait for kubelet
	I1028 11:13:12.125805  150723 kubeadm.go:582] duration metric: took 22.104503497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:13:12.125835  150723 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:13:12.296271  150723 request.go:632] Waited for 170.346607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1028 11:13:12.296352  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1028 11:13:12.296358  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:12.296365  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:12.296370  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:12.301322  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:12.302235  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:13:12.302261  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:13:12.302297  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:13:12.302303  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:13:12.302310  150723 node_conditions.go:105] duration metric: took 176.469824ms to run NodePressure ...
	I1028 11:13:12.302331  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:13:12.302371  150723 start.go:255] writing updated cluster config ...
	I1028 11:13:12.304722  150723 out.go:201] 
	I1028 11:13:12.306493  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:12.306595  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:12.308496  150723 out.go:177] * Starting "ha-928358-m03" control-plane node in "ha-928358" cluster
	I1028 11:13:12.310210  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:13:12.310234  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:13:12.310336  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:13:12.310347  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:13:12.310430  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:12.310601  150723 start.go:360] acquireMachinesLock for ha-928358-m03: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:13:12.310642  150723 start.go:364] duration metric: took 22.061µs to acquireMachinesLock for "ha-928358-m03"
	I1028 11:13:12.310662  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:13:12.310748  150723 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 11:13:12.312443  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:13:12.312555  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:12.312596  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:12.327768  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I1028 11:13:12.328249  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:12.328745  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:12.328765  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:12.329102  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:12.329311  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:12.329448  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:12.329611  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:13:12.329642  150723 client.go:168] LocalClient.Create starting
	I1028 11:13:12.329670  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:13:12.329703  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:13:12.329720  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:13:12.329768  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:13:12.329788  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:13:12.329799  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:13:12.329815  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:13:12.329826  150723 main.go:141] libmachine: (ha-928358-m03) Calling .PreCreateCheck
	I1028 11:13:12.329995  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:12.330372  150723 main.go:141] libmachine: Creating machine...
	I1028 11:13:12.330386  150723 main.go:141] libmachine: (ha-928358-m03) Calling .Create
	I1028 11:13:12.330528  150723 main.go:141] libmachine: (ha-928358-m03) Creating KVM machine...
	I1028 11:13:12.331834  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found existing default KVM network
	I1028 11:13:12.332000  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found existing private KVM network mk-ha-928358
	I1028 11:13:12.332124  150723 main.go:141] libmachine: (ha-928358-m03) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 ...
	I1028 11:13:12.332140  150723 main.go:141] libmachine: (ha-928358-m03) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:13:12.332221  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.332127  151534 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:13:12.332333  150723 main.go:141] libmachine: (ha-928358-m03) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:13:12.597391  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.597227  151534 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa...
	I1028 11:13:12.699922  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.699777  151534 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/ha-928358-m03.rawdisk...
	I1028 11:13:12.699960  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Writing magic tar header
	I1028 11:13:12.699975  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Writing SSH key tar header
	I1028 11:13:12.699986  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.699933  151534 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 ...
	I1028 11:13:12.700170  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 (perms=drwx------)
	I1028 11:13:12.700205  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:13:12.700218  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03
	I1028 11:13:12.700232  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:13:12.700244  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:13:12.700258  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:13:12.700271  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:13:12.700287  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:13:12.700300  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:13:12.700313  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:13:12.700325  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:13:12.700339  150723 main.go:141] libmachine: (ha-928358-m03) Creating domain...
	I1028 11:13:12.700363  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:13:12.700371  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home
	I1028 11:13:12.700395  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Skipping /home - not owner
	I1028 11:13:12.701297  150723 main.go:141] libmachine: (ha-928358-m03) define libvirt domain using xml: 
	I1028 11:13:12.701328  150723 main.go:141] libmachine: (ha-928358-m03) <domain type='kvm'>
	I1028 11:13:12.701339  150723 main.go:141] libmachine: (ha-928358-m03)   <name>ha-928358-m03</name>
	I1028 11:13:12.701346  150723 main.go:141] libmachine: (ha-928358-m03)   <memory unit='MiB'>2200</memory>
	I1028 11:13:12.701358  150723 main.go:141] libmachine: (ha-928358-m03)   <vcpu>2</vcpu>
	I1028 11:13:12.701364  150723 main.go:141] libmachine: (ha-928358-m03)   <features>
	I1028 11:13:12.701373  150723 main.go:141] libmachine: (ha-928358-m03)     <acpi/>
	I1028 11:13:12.701383  150723 main.go:141] libmachine: (ha-928358-m03)     <apic/>
	I1028 11:13:12.701391  150723 main.go:141] libmachine: (ha-928358-m03)     <pae/>
	I1028 11:13:12.701404  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701415  150723 main.go:141] libmachine: (ha-928358-m03)   </features>
	I1028 11:13:12.701423  150723 main.go:141] libmachine: (ha-928358-m03)   <cpu mode='host-passthrough'>
	I1028 11:13:12.701433  150723 main.go:141] libmachine: (ha-928358-m03)   
	I1028 11:13:12.701445  150723 main.go:141] libmachine: (ha-928358-m03)   </cpu>
	I1028 11:13:12.701456  150723 main.go:141] libmachine: (ha-928358-m03)   <os>
	I1028 11:13:12.701463  150723 main.go:141] libmachine: (ha-928358-m03)     <type>hvm</type>
	I1028 11:13:12.701472  150723 main.go:141] libmachine: (ha-928358-m03)     <boot dev='cdrom'/>
	I1028 11:13:12.701478  150723 main.go:141] libmachine: (ha-928358-m03)     <boot dev='hd'/>
	I1028 11:13:12.701513  150723 main.go:141] libmachine: (ha-928358-m03)     <bootmenu enable='no'/>
	I1028 11:13:12.701555  150723 main.go:141] libmachine: (ha-928358-m03)   </os>
	I1028 11:13:12.701565  150723 main.go:141] libmachine: (ha-928358-m03)   <devices>
	I1028 11:13:12.701573  150723 main.go:141] libmachine: (ha-928358-m03)     <disk type='file' device='cdrom'>
	I1028 11:13:12.701585  150723 main.go:141] libmachine: (ha-928358-m03)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/boot2docker.iso'/>
	I1028 11:13:12.701593  150723 main.go:141] libmachine: (ha-928358-m03)       <target dev='hdc' bus='scsi'/>
	I1028 11:13:12.701600  150723 main.go:141] libmachine: (ha-928358-m03)       <readonly/>
	I1028 11:13:12.701607  150723 main.go:141] libmachine: (ha-928358-m03)     </disk>
	I1028 11:13:12.701622  150723 main.go:141] libmachine: (ha-928358-m03)     <disk type='file' device='disk'>
	I1028 11:13:12.701635  150723 main.go:141] libmachine: (ha-928358-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:13:12.701651  150723 main.go:141] libmachine: (ha-928358-m03)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/ha-928358-m03.rawdisk'/>
	I1028 11:13:12.701662  150723 main.go:141] libmachine: (ha-928358-m03)       <target dev='hda' bus='virtio'/>
	I1028 11:13:12.701673  150723 main.go:141] libmachine: (ha-928358-m03)     </disk>
	I1028 11:13:12.701683  150723 main.go:141] libmachine: (ha-928358-m03)     <interface type='network'>
	I1028 11:13:12.701717  150723 main.go:141] libmachine: (ha-928358-m03)       <source network='mk-ha-928358'/>
	I1028 11:13:12.701741  150723 main.go:141] libmachine: (ha-928358-m03)       <model type='virtio'/>
	I1028 11:13:12.701754  150723 main.go:141] libmachine: (ha-928358-m03)     </interface>
	I1028 11:13:12.701765  150723 main.go:141] libmachine: (ha-928358-m03)     <interface type='network'>
	I1028 11:13:12.701776  150723 main.go:141] libmachine: (ha-928358-m03)       <source network='default'/>
	I1028 11:13:12.701787  150723 main.go:141] libmachine: (ha-928358-m03)       <model type='virtio'/>
	I1028 11:13:12.701800  150723 main.go:141] libmachine: (ha-928358-m03)     </interface>
	I1028 11:13:12.701809  150723 main.go:141] libmachine: (ha-928358-m03)     <serial type='pty'>
	I1028 11:13:12.701821  150723 main.go:141] libmachine: (ha-928358-m03)       <target port='0'/>
	I1028 11:13:12.701833  150723 main.go:141] libmachine: (ha-928358-m03)     </serial>
	I1028 11:13:12.701844  150723 main.go:141] libmachine: (ha-928358-m03)     <console type='pty'>
	I1028 11:13:12.701855  150723 main.go:141] libmachine: (ha-928358-m03)       <target type='serial' port='0'/>
	I1028 11:13:12.701866  150723 main.go:141] libmachine: (ha-928358-m03)     </console>
	I1028 11:13:12.701874  150723 main.go:141] libmachine: (ha-928358-m03)     <rng model='virtio'>
	I1028 11:13:12.701883  150723 main.go:141] libmachine: (ha-928358-m03)       <backend model='random'>/dev/random</backend>
	I1028 11:13:12.701898  150723 main.go:141] libmachine: (ha-928358-m03)     </rng>
	I1028 11:13:12.701909  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701917  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701927  150723 main.go:141] libmachine: (ha-928358-m03)   </devices>
	I1028 11:13:12.701935  150723 main.go:141] libmachine: (ha-928358-m03) </domain>
	I1028 11:13:12.701944  150723 main.go:141] libmachine: (ha-928358-m03) 
	I1028 11:13:12.709093  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:b5:fb:00 in network default
	I1028 11:13:12.709827  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring networks are active...
	I1028 11:13:12.709849  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:12.710555  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring network default is active
	I1028 11:13:12.710786  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring network mk-ha-928358 is active
	I1028 11:13:12.711115  150723 main.go:141] libmachine: (ha-928358-m03) Getting domain xml...
	I1028 11:13:12.711807  150723 main.go:141] libmachine: (ha-928358-m03) Creating domain...
	I1028 11:13:13.995752  150723 main.go:141] libmachine: (ha-928358-m03) Waiting to get IP...
	I1028 11:13:13.996563  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:13.997045  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:13.997085  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:13.997018  151534 retry.go:31] will retry after 234.151571ms: waiting for machine to come up
	I1028 11:13:14.232519  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.233064  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.233096  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.232999  151534 retry.go:31] will retry after 249.582339ms: waiting for machine to come up
	I1028 11:13:14.484383  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.484878  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.484915  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.484812  151534 retry.go:31] will retry after 409.553215ms: waiting for machine to come up
	I1028 11:13:14.896380  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.896855  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.896887  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.896797  151534 retry.go:31] will retry after 412.085621ms: waiting for machine to come up
	I1028 11:13:15.310086  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:15.310769  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:15.310799  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:15.310719  151534 retry.go:31] will retry after 651.315136ms: waiting for machine to come up
	I1028 11:13:15.963589  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:15.964049  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:15.964078  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:15.963990  151534 retry.go:31] will retry after 936.522294ms: waiting for machine to come up
	I1028 11:13:16.902173  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:16.902668  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:16.902689  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:16.902618  151534 retry.go:31] will retry after 774.455135ms: waiting for machine to come up
	I1028 11:13:17.679023  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:17.679574  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:17.679600  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:17.679540  151534 retry.go:31] will retry after 1.069131352s: waiting for machine to come up
	I1028 11:13:18.750780  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:18.751352  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:18.751375  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:18.751284  151534 retry.go:31] will retry after 1.587573663s: waiting for machine to come up
	I1028 11:13:20.340206  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:20.340612  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:20.340643  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:20.340566  151534 retry.go:31] will retry after 1.424108777s: waiting for machine to come up
	I1028 11:13:21.766872  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:21.767376  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:21.767397  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:21.767337  151534 retry.go:31] will retry after 1.867673803s: waiting for machine to come up
	I1028 11:13:23.637608  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:23.638075  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:23.638103  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:23.638049  151534 retry.go:31] will retry after 3.385284423s: waiting for machine to come up
	I1028 11:13:27.027812  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:27.028397  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:27.028423  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:27.028342  151534 retry.go:31] will retry after 4.143137357s: waiting for machine to come up
	I1028 11:13:31.174612  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:31.174990  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:31.175020  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:31.174951  151534 retry.go:31] will retry after 3.870983412s: waiting for machine to come up
	I1028 11:13:35.049044  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.049668  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has current primary IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.049716  150723 main.go:141] libmachine: (ha-928358-m03) Found IP for machine: 192.168.39.44
	I1028 11:13:35.049734  150723 main.go:141] libmachine: (ha-928358-m03) Reserving static IP address...
	I1028 11:13:35.050296  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find host DHCP lease matching {name: "ha-928358-m03", mac: "52:54:00:7e:d3:f9", ip: "192.168.39.44"} in network mk-ha-928358
	I1028 11:13:35.126256  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Getting to WaitForSSH function...
	I1028 11:13:35.126303  150723 main.go:141] libmachine: (ha-928358-m03) Reserved static IP address: 192.168.39.44
	I1028 11:13:35.126318  150723 main.go:141] libmachine: (ha-928358-m03) Waiting for SSH to be available...
	I1028 11:13:35.128851  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.129272  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.129315  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.129446  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using SSH client type: external
	I1028 11:13:35.129476  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa (-rw-------)
	I1028 11:13:35.129507  150723 main.go:141] libmachine: (ha-928358-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:13:35.129520  150723 main.go:141] libmachine: (ha-928358-m03) DBG | About to run SSH command:
	I1028 11:13:35.129564  150723 main.go:141] libmachine: (ha-928358-m03) DBG | exit 0
	I1028 11:13:35.253921  150723 main.go:141] libmachine: (ha-928358-m03) DBG | SSH cmd err, output: <nil>: 
	I1028 11:13:35.254211  150723 main.go:141] libmachine: (ha-928358-m03) KVM machine creation complete!
	I1028 11:13:35.254512  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:35.255052  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:35.255255  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:35.255399  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:13:35.255411  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetState
	I1028 11:13:35.256908  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:13:35.256921  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:13:35.256927  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:13:35.256932  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.259735  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.260211  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.260237  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.260436  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.260625  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.260784  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.260899  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.261057  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.261307  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.261321  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:13:35.360859  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:13:35.360890  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:13:35.360902  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.364454  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.364848  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.364904  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.365213  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.365431  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.365607  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.365742  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.365932  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.366116  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.366130  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:13:35.470987  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:13:35.471094  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:13:35.471109  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:13:35.471120  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.471399  150723 buildroot.go:166] provisioning hostname "ha-928358-m03"
	I1028 11:13:35.471424  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.471622  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.474085  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.474509  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.474542  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.474681  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.474871  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.475021  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.475156  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.475305  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.475494  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.475510  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358-m03 && echo "ha-928358-m03" | sudo tee /etc/hostname
	I1028 11:13:35.593400  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358-m03
	
	I1028 11:13:35.593429  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.596415  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.596740  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.596767  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.596962  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.597183  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.597361  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.597490  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.597704  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.597875  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.597892  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:13:35.715751  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:13:35.715791  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:13:35.715811  150723 buildroot.go:174] setting up certificates
	I1028 11:13:35.715821  150723 provision.go:84] configureAuth start
	I1028 11:13:35.715834  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.716106  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:35.718868  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.719187  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.719219  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.719354  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.721477  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.721760  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.721790  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.721917  150723 provision.go:143] copyHostCerts
	I1028 11:13:35.721979  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:13:35.722032  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:13:35.722044  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:13:35.722140  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:13:35.722245  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:13:35.722278  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:13:35.722289  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:13:35.722332  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:13:35.722402  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:13:35.722429  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:13:35.722435  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:13:35.722459  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:13:35.722531  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358-m03 san=[127.0.0.1 192.168.39.44 ha-928358-m03 localhost minikube]
	I1028 11:13:35.825404  150723 provision.go:177] copyRemoteCerts
	I1028 11:13:35.825459  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:13:35.825483  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.828415  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.828773  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.828803  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.828972  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.829151  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.829337  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.829485  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:35.913472  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:13:35.913575  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:13:35.940828  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:13:35.940904  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:13:35.968009  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:13:35.968078  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 11:13:35.997592  150723 provision.go:87] duration metric: took 281.755193ms to configureAuth
	I1028 11:13:35.997618  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:13:35.997801  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:35.997869  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.000450  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.000935  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.000970  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.001165  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.001385  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.001575  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.001734  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.001893  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:36.002062  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:36.002076  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:13:36.221329  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:13:36.221364  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:13:36.221433  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetURL
	I1028 11:13:36.222571  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using libvirt version 6000000
	I1028 11:13:36.224781  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.225156  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.225179  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.225329  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:13:36.225344  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:13:36.225353  150723 client.go:171] duration metric: took 23.895703285s to LocalClient.Create
	I1028 11:13:36.225379  150723 start.go:167] duration metric: took 23.895771231s to libmachine.API.Create "ha-928358"
	I1028 11:13:36.225390  150723 start.go:293] postStartSetup for "ha-928358-m03" (driver="kvm2")
	I1028 11:13:36.225399  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:13:36.225413  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.225669  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:13:36.225696  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.227681  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.227995  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.228023  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.228147  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.228314  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.228474  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.228601  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.313594  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:13:36.318443  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:13:36.318477  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:13:36.318544  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:13:36.318614  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:13:36.318624  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:13:36.318705  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:13:36.330227  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:13:36.357995  150723 start.go:296] duration metric: took 132.588764ms for postStartSetup
	I1028 11:13:36.358059  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:36.358728  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:36.361773  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.362238  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.362267  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.362589  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:36.362828  150723 start.go:128] duration metric: took 24.052057424s to createHost
	I1028 11:13:36.362855  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.365684  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.365985  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.366016  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.366211  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.366426  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.366575  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.366696  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.366842  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:36.367055  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:36.367079  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:13:36.470814  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114016.442636655
	
	I1028 11:13:36.470843  150723 fix.go:216] guest clock: 1730114016.442636655
	I1028 11:13:36.470853  150723 fix.go:229] Guest: 2024-10-28 11:13:36.442636655 +0000 UTC Remote: 2024-10-28 11:13:36.362843133 +0000 UTC m=+156.939582341 (delta=79.793522ms)
	I1028 11:13:36.470869  150723 fix.go:200] guest clock delta is within tolerance: 79.793522ms
	I1028 11:13:36.470874  150723 start.go:83] releasing machines lock for "ha-928358-m03", held for 24.160222671s
	I1028 11:13:36.470894  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.471174  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:36.473802  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.474314  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.474345  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.476703  150723 out.go:177] * Found network options:
	I1028 11:13:36.478253  150723 out.go:177]   - NO_PROXY=192.168.39.206,192.168.39.15
	W1028 11:13:36.479492  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:13:36.479516  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:13:36.479532  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480171  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480372  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480474  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:13:36.480516  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	W1028 11:13:36.480627  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:13:36.480648  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:13:36.480710  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:13:36.480733  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.483390  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483597  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483802  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.483836  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483976  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.484137  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.484152  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.484171  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.484240  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.484323  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.484392  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.484441  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.484542  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.484643  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.722609  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:13:36.728895  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:13:36.728959  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:13:36.746783  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:13:36.746814  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:13:36.746889  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:13:36.764176  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:13:36.780539  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:13:36.780611  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:13:36.795323  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:13:36.811733  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:13:36.943649  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:13:37.116480  150723 docker.go:233] disabling docker service ...
	I1028 11:13:37.116541  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:13:37.131848  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:13:37.146207  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:13:37.271760  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:13:37.397315  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:13:37.413150  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:13:37.433193  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:13:37.433274  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.448784  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:13:37.448861  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.461820  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.474878  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.487273  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:13:37.500384  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.513109  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.533296  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.546472  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:13:37.557495  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:13:37.557598  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:13:37.573136  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:13:37.584661  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:13:37.701023  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:13:37.798120  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:13:37.798207  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:13:37.803954  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:13:37.804021  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:13:37.808938  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:13:37.851814  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:13:37.851905  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:13:37.881347  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:13:37.916129  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:13:37.917503  150723 out.go:177]   - env NO_PROXY=192.168.39.206
	I1028 11:13:37.918841  150723 out.go:177]   - env NO_PROXY=192.168.39.206,192.168.39.15
	I1028 11:13:37.920060  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:37.923080  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:37.923530  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:37.923560  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:37.923801  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:13:37.928489  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:13:37.944276  150723 mustload.go:65] Loading cluster: ha-928358
	I1028 11:13:37.944540  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:37.944876  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:37.944917  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:37.960868  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I1028 11:13:37.961448  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:37.961978  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:37.962000  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:37.962320  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:37.962554  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:13:37.964176  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:13:37.964500  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:37.964546  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:37.980099  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I1028 11:13:37.980536  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:37.980994  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:37.981027  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:37.981316  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:37.981476  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:13:37.981636  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.44
	I1028 11:13:37.981649  150723 certs.go:194] generating shared ca certs ...
	I1028 11:13:37.981667  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:37.981815  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:13:37.981867  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:13:37.981880  150723 certs.go:256] generating profile certs ...
	I1028 11:13:37.981981  150723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:13:37.982024  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408
	I1028 11:13:37.982045  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.44 192.168.39.254]
	I1028 11:13:38.031818  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 ...
	I1028 11:13:38.031849  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408: {Name:mk24630c498d89b32162095507c0812c854412bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:38.032046  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408 ...
	I1028 11:13:38.032062  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408: {Name:mk38f2fd390923bb1dfc386b88fc31f22cbd1405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:38.032164  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:13:38.032326  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:13:38.032501  150723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:13:38.032524  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:13:38.032548  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:13:38.032568  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:13:38.032585  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:13:38.032605  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:13:38.032622  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:13:38.032641  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:13:38.045605  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:13:38.045699  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:13:38.045758  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:13:38.045774  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:13:38.045809  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:13:38.045836  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:13:38.045857  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:13:38.045912  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:13:38.045950  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.045974  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.045992  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.046044  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:13:38.049011  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:38.049464  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:13:38.049485  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:38.049679  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:13:38.049889  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:13:38.050031  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:13:38.050163  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:13:38.129875  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:13:38.135272  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:13:38.146812  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:13:38.151195  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:13:38.162579  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:13:38.167018  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:13:38.178835  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:13:38.183162  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:13:38.195172  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:13:38.199929  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:13:38.212017  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:13:38.216559  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:13:38.228337  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:13:38.256831  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:13:38.282349  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:13:38.312381  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:13:38.340368  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:13:38.368852  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:13:38.396585  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:13:38.425195  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:13:38.453101  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:13:38.479115  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:13:38.505463  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:13:38.531445  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:13:38.550676  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:13:38.570134  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:13:38.588413  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:13:38.606756  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:13:38.626726  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:13:38.646275  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:13:38.665976  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:13:38.672176  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:13:38.685017  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.690136  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.690209  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.697711  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:13:38.712239  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:13:38.725832  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.730869  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.730941  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.737271  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:13:38.751047  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:13:38.763980  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.769518  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.769615  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.776609  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:13:38.791196  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:13:38.796201  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:13:38.796261  150723 kubeadm.go:934] updating node {m03 192.168.39.44 8443 v1.31.2 crio true true} ...
	I1028 11:13:38.796362  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:13:38.796397  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:13:38.796470  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:13:38.817160  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:13:38.817224  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
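The generated kube-vip manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below. A small sanity check, sketched under the assumption that one would parse it back into a core/v1 Pod and confirm the VIP address env var (the file path and the "address" env name come from the log; the check itself is illustrative):

```go
// Hedged sketch: confirm the generated kube-vip static-pod manifest parses
// as a core/v1 Pod and carries the expected VIP ("address" env var).
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	for _, env := range pod.Spec.Containers[0].Env {
		if env.Name == "address" {
			fmt.Println("kube-vip VIP:", env.Value) // expect 192.168.39.254
		}
	}
}
```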
	I1028 11:13:38.817279  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:13:38.829712  150723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:13:38.829765  150723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:13:38.842596  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:13:38.842645  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:13:38.842602  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:13:38.842708  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:13:38.842755  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:13:38.842602  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:13:38.842821  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:13:38.842886  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:13:38.849835  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:13:38.849867  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:13:38.850062  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:13:38.850096  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:13:38.869860  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:13:38.870019  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:13:39.008547  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:13:39.008597  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
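The "Not caching binary, using https://dl.k8s.io/...?checksum=file:..." lines above download kubeadm/kubectl/kubelet and verify them against the published .sha256 files before copying them into the VM. A minimal sketch of that verification, assuming a hypothetical helper (the dl.k8s.io URL layout is from the log; everything else is illustrative):

```go
// Hedged sketch: compare a cached binary against the sha256 file published
// alongside it on dl.k8s.io. verify() is a hypothetical helper, not
// minikube's implementation.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func verify(localPath, shaURL string) error {
	resp, err := http.Get(shaURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if !strings.HasPrefix(strings.TrimSpace(string(want)), got) {
		return fmt.Errorf("checksum mismatch for %s", localPath)
	}
	return nil
}

func main() {
	err := verify(
		os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.2/kubelet"),
		"https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256",
	)
	fmt.Println("verify:", err)
}
```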
	I1028 11:13:39.841044  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:13:39.851424  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:13:39.870537  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:13:39.890208  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:13:39.908650  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:13:39.913130  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:13:39.926430  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:13:40.057322  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:13:40.076284  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:13:40.076669  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:40.076716  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:40.094065  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I1028 11:13:40.094505  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:40.095080  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:40.095109  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:40.095526  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:40.095722  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:13:40.095896  150723 start.go:317] joinCluster: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:13:40.096063  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:13:40.096090  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:13:40.099282  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:40.099834  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:13:40.099865  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:40.100013  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:13:40.100216  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:13:40.100410  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:13:40.100563  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:13:40.273359  150723 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:13:40.273397  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a413hq.qk9z79cdsin0pfn9 --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443"
	I1028 11:14:04.540358  150723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a413hq.qk9z79cdsin0pfn9 --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443": (24.266932187s)
	I1028 11:14:04.540403  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:14:05.110298  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358-m03 minikube.k8s.io/updated_at=2024_10_28T11_14_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=false
	I1028 11:14:05.258236  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-928358-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:14:05.400029  150723 start.go:319] duration metric: took 25.304126551s to joinCluster
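After the join completes, the log shows the new node being labeled (minikube.k8s.io/* metadata) and its control-plane NoSchedule taint removed via kubectl inside the VM. An equivalent from the test host would patch the node object through client-go; a minimal sketch, assuming a hypothetical label key/value for illustration:

```go
// Hedged sketch: add/overwrite a node label with a strategic-merge patch,
// roughly mirroring the "kubectl label --overwrite nodes ..." step above.
// The label key/value here are illustrative, not minikube's exact set.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19876-132631/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "ha-928358-m03",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```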
	I1028 11:14:05.400118  150723 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:14:05.400571  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:14:05.401586  150723 out.go:177] * Verifying Kubernetes components...
	I1028 11:14:05.403593  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:14:05.647217  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:14:05.664862  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:14:05.665098  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:14:05.665166  150723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.206:8443
	I1028 11:14:05.665399  150723 node_ready.go:35] waiting up to 6m0s for node "ha-928358-m03" to be "Ready" ...
	I1028 11:14:05.665469  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:05.665476  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:05.665484  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:05.665490  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:05.669744  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:06.165968  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:06.165997  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:06.166009  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:06.166016  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:06.170123  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:06.666317  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:06.666416  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:06.666445  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:06.666462  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:06.670843  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:07.165728  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:07.165755  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:07.165768  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:07.165776  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:07.169304  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:07.666123  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:07.666154  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:07.666165  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:07.666171  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:07.669713  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:07.670892  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:08.166009  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:08.166031  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:08.166039  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:08.166043  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:08.169692  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:08.666389  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:08.666423  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:08.666436  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:08.666446  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:08.671535  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:14:09.166494  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:09.166518  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:09.166530  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:09.166537  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:09.170858  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:09.665722  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:09.665745  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:09.665753  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:09.665762  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:09.670170  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:09.671084  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:10.165695  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:10.165724  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:10.165735  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:10.165742  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:10.173147  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:10.666401  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:10.666429  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:10.666440  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:10.666443  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:10.671830  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:14:11.165701  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:11.165722  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:11.165731  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:11.165737  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:11.228148  150723 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I1028 11:14:11.666333  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:11.666388  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:11.666401  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:11.666408  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:11.670186  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:11.671264  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:12.165684  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:12.165709  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:12.165715  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:12.165719  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:12.170052  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:12.666466  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:12.666494  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:12.666504  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:12.666509  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:12.670352  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:13.166382  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:13.166410  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:13.166421  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:13.166427  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:13.171235  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:13.666623  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:13.666647  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:13.666656  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:13.666661  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:13.670621  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:14.165740  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:14.165767  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:14.165776  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:14.165783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:14.169178  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:14.170214  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:14.666184  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:14.666206  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:14.666215  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:14.666219  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:14.670466  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:15.166232  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:15.166261  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:15.166272  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:15.166276  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:15.173444  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:15.666306  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:15.666335  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:15.666344  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:15.666348  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:15.670385  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:16.166429  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:16.166461  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:16.166474  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:16.166481  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:16.170181  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:16.170699  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:16.665698  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:16.665723  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:16.665730  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:16.665734  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:16.669776  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:17.165640  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:17.165664  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:17.165672  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:17.165676  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:17.169368  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:17.666177  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:17.666202  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:17.666210  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:17.666214  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:17.670134  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.165917  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:18.165940  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:18.165948  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:18.165952  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:18.169496  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.665925  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:18.665949  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:18.665971  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:18.665976  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:18.669433  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.670970  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:19.165694  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:19.165718  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:19.165728  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:19.165732  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:19.170437  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:19.666095  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:19.666123  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:19.666134  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:19.666141  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:19.668970  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:20.166291  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:20.166314  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:20.166322  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:20.166326  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:20.170016  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:20.665789  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:20.665815  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:20.665822  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:20.665827  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:20.669287  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:21.165826  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:21.165853  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:21.165862  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:21.165868  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:21.169651  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:21.170332  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:21.665771  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:21.665804  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:21.665816  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:21.665822  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:21.669841  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:22.166380  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:22.166406  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:22.166414  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:22.166420  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:22.169816  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:22.666341  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:22.666364  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:22.666372  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:22.666377  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:22.670923  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:23.165737  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:23.165762  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.165771  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.165776  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.169299  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.665765  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:23.665789  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.665797  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.665801  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.669697  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.670619  150723 node_ready.go:49] node "ha-928358-m03" has status "Ready":"True"
	I1028 11:14:23.670643  150723 node_ready.go:38] duration metric: took 18.005227415s for node "ha-928358-m03" to be "Ready" ...
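The repeated GETs above are the node_ready poll: the node object is fetched roughly every 500ms until its Ready condition turns True. A client-go sketch of an equivalent check (kubeconfig path and node name taken from the log; the polling helper itself is illustrative, not minikube's code):

```go
// Hedged sketch: poll a node's Ready condition until True or a deadline,
// mirroring the node_ready wait logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19876-132631/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-928358-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node")
}
```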
	I1028 11:14:23.670662  150723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:14:23.670813  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:23.670845  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.670858  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.670875  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.677257  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:23.683895  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.683990  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gnm9r
	I1028 11:14:23.683999  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.684007  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.684011  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.688327  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:23.688931  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.688948  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.688956  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.688960  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.691787  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.692523  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.692543  150723 pod_ready.go:82] duration metric: took 8.61912ms for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.692554  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.692624  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xxxgw
	I1028 11:14:23.692632  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.692639  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.692645  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.695738  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.696515  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.696533  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.696542  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.696548  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.699472  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.700068  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.700097  150723 pod_ready.go:82] duration metric: took 7.535535ms for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.700107  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.700162  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358
	I1028 11:14:23.700171  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.700178  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.700184  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.702917  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.703534  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.703550  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.703559  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.703566  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.706103  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.706650  150723 pod_ready.go:93] pod "etcd-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.706674  150723 pod_ready.go:82] duration metric: took 6.560031ms for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.706686  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.706758  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m02
	I1028 11:14:23.706768  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.706778  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.706785  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.709373  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.710451  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:23.710472  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.710484  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.710490  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.713376  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.713980  150723 pod_ready.go:93] pod "etcd-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.714010  150723 pod_ready.go:82] duration metric: took 7.313443ms for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.714024  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.866359  150723 request.go:632] Waited for 152.224049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m03
	I1028 11:14:23.866476  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m03
	I1028 11:14:23.866492  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.866504  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.866516  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.871166  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.066273  150723 request.go:632] Waited for 194.358951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:24.066350  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:24.066361  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.066372  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.066378  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.070313  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.071003  150723 pod_ready.go:93] pod "etcd-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.071021  150723 pod_ready.go:82] duration metric: took 356.990267ms for pod "etcd-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
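The "Waited ... due to client-side throttling" messages that start appearing here come from client-go's default client-side rate limiter (QPS 5, Burst 10 unless overridden), not from API-server priority and fairness. A minimal sketch of raising those limits on the rest.Config; the specific numbers are arbitrary illustrations:

```go
// Hedged sketch: raise client-go's client-side QPS/Burst so bursts of GETs
// like the pod/node readiness checks above are not locally throttled.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19876-132631/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5
	cfg.Burst = 100 // client-go default is 10
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```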
	I1028 11:14:24.071039  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.266224  150723 request.go:632] Waited for 195.110039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:14:24.266285  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:14:24.266290  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.266298  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.266303  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.271102  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.466777  150723 request.go:632] Waited for 195.051662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:24.466835  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:24.466840  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.466848  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.466857  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.471602  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.472438  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.472458  150723 pod_ready.go:82] duration metric: took 401.411661ms for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.472468  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.666245  150723 request.go:632] Waited for 193.688569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:14:24.666314  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:14:24.666321  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.666332  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.666337  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.670192  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.866165  150723 request.go:632] Waited for 195.218003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:24.866225  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:24.866230  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.866237  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.866242  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.869696  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.870520  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.870539  150723 pod_ready.go:82] duration metric: took 398.065091ms for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.870549  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.066723  150723 request.go:632] Waited for 196.090526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m03
	I1028 11:14:25.066790  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m03
	I1028 11:14:25.066796  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.066812  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.066818  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.070840  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:25.266492  150723 request.go:632] Waited for 194.408437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:25.266550  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:25.266555  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.266563  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.266567  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.270440  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:25.271647  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:25.271668  150723 pod_ready.go:82] duration metric: took 401.112731ms for pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.271677  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.466686  150723 request.go:632] Waited for 194.942796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:14:25.466776  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:14:25.466782  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.466791  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.466799  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.478807  150723 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:14:25.666227  150723 request.go:632] Waited for 186.359371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:25.666322  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:25.666335  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.666346  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.666355  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.669950  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:25.670691  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:25.670710  150723 pod_ready.go:82] duration metric: took 399.026254ms for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.670723  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.866724  150723 request.go:632] Waited for 195.936368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:14:25.866801  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:14:25.866807  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.866814  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.866819  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.870640  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.065827  150723 request.go:632] Waited for 194.310294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:26.065907  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:26.065912  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.065920  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.065925  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.069699  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.070459  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.070478  150723 pod_ready.go:82] duration metric: took 399.749253ms for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.070489  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.266701  150723 request.go:632] Waited for 196.138179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m03
	I1028 11:14:26.266792  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m03
	I1028 11:14:26.266809  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.266825  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.266832  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.270679  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.466081  150723 request.go:632] Waited for 194.361983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:26.466174  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:26.466182  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.466194  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.466206  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.470252  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:26.470784  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.470804  150723 pod_ready.go:82] duration metric: took 400.309396ms for pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.470815  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.665844  150723 request.go:632] Waited for 194.95975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:14:26.665902  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:14:26.665925  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.665956  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.665963  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.669385  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.866618  150723 request.go:632] Waited for 196.393847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:26.866674  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:26.866679  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.866687  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.866690  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.870012  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.870701  150723 pod_ready.go:93] pod "kube-proxy-8fxdn" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.870720  150723 pod_ready.go:82] duration metric: took 399.898606ms for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.870734  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.065775  150723 request.go:632] Waited for 194.965869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:14:27.065845  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:14:27.065850  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.065858  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.065865  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.069945  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:27.266078  150723 request.go:632] Waited for 195.378208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:27.266154  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:27.266159  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.266167  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.266174  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.269961  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:27.270605  150723 pod_ready.go:93] pod "kube-proxy-cfhp5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:27.270625  150723 pod_ready.go:82] duration metric: took 399.882701ms for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.270640  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-np8x5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.466435  150723 request.go:632] Waited for 195.719587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-np8x5
	I1028 11:14:27.466503  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-np8x5
	I1028 11:14:27.466511  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.466550  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.466562  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.473780  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:27.666214  150723 request.go:632] Waited for 191.347069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:27.666284  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:27.666291  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.666298  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.666302  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.670820  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:27.671554  150723 pod_ready.go:93] pod "kube-proxy-np8x5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:27.671578  150723 pod_ready.go:82] duration metric: took 400.929643ms for pod "kube-proxy-np8x5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.671589  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.866741  150723 request.go:632] Waited for 195.08002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:14:27.866814  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:14:27.866821  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.866832  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.866843  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.870682  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.066337  150723 request.go:632] Waited for 194.812157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:28.066403  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:28.066408  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.066416  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.066420  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.069743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.070462  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.070483  150723 pod_ready.go:82] duration metric: took 398.887712ms for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.070497  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.265961  150723 request.go:632] Waited for 195.392733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:14:28.266039  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:14:28.266047  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.266057  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.266088  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.269740  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.465851  150723 request.go:632] Waited for 195.318291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:28.465931  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:28.465937  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.465949  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.465957  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.470812  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:28.471696  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.471720  150723 pod_ready.go:82] duration metric: took 401.210524ms for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.471733  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.665763  150723 request.go:632] Waited for 193.940561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m03
	I1028 11:14:28.665854  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m03
	I1028 11:14:28.665869  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.665877  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.665883  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.669746  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.866768  150723 request.go:632] Waited for 196.382736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:28.866827  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:28.866832  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.866840  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.866844  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.870665  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.871107  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.871125  150723 pod_ready.go:82] duration metric: took 399.382061ms for pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.871136  150723 pod_ready.go:39] duration metric: took 5.200463354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:14:28.871154  150723 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:14:28.871205  150723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:14:28.894991  150723 api_server.go:72] duration metric: took 23.494825881s to wait for apiserver process to appear ...
	I1028 11:14:28.895029  150723 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:14:28.895053  150723 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1028 11:14:28.901769  150723 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1028 11:14:28.901850  150723 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1028 11:14:28.901857  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.901868  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.901879  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.903049  150723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:14:28.903133  150723 api_server.go:141] control plane version: v1.31.2
	I1028 11:14:28.903153  150723 api_server.go:131] duration metric: took 8.11544ms to wait for apiserver health ...
	I1028 11:14:28.903164  150723 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:14:29.066557  150723 request.go:632] Waited for 163.310035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.066623  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.066628  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.066650  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.066657  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.073405  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:29.079996  150723 system_pods.go:59] 24 kube-system pods found
	I1028 11:14:29.080029  150723 system_pods.go:61] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:14:29.080039  150723 system_pods.go:61] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:14:29.080043  150723 system_pods.go:61] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:14:29.080047  150723 system_pods.go:61] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:14:29.080050  150723 system_pods.go:61] "etcd-ha-928358-m03" [56e4453a-65fd-4b3f-9556-e5cec7aa0400] Running
	I1028 11:14:29.080053  150723 system_pods.go:61] "kindnet-9k2mz" [946ea25c-8bc6-46d5-9804-7d8f75ba2ad4] Running
	I1028 11:14:29.080056  150723 system_pods.go:61] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:14:29.080062  150723 system_pods.go:61] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:14:29.080065  150723 system_pods.go:61] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:14:29.080068  150723 system_pods.go:61] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:14:29.080071  150723 system_pods.go:61] "kube-apiserver-ha-928358-m03" [b5e63feb-e15c-42f4-8e49-9775a7602add] Running
	I1028 11:14:29.080075  150723 system_pods.go:61] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:14:29.080079  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:14:29.080085  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m03" [ad543df1-fd1e-4fbe-b70b-06af7d39f971] Running
	I1028 11:14:29.080089  150723 system_pods.go:61] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:14:29.080094  150723 system_pods.go:61] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:14:29.080099  150723 system_pods.go:61] "kube-proxy-np8x5" [c8dd1d78-2375-49d4-b476-ec52dd65830b] Running
	I1028 11:14:29.080103  150723 system_pods.go:61] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:14:29.080109  150723 system_pods.go:61] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:14:29.080117  150723 system_pods.go:61] "kube-scheduler-ha-928358-m03" [b9809d8d-8a45-4363-9b03-55995deb6b62] Running
	I1028 11:14:29.080124  150723 system_pods.go:61] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:14:29.080135  150723 system_pods.go:61] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:14:29.080139  150723 system_pods.go:61] "kube-vip-ha-928358-m03" [894e8b21-2ffc-4ad5-89b1-80c915aecfb9] Running
	I1028 11:14:29.080142  150723 system_pods.go:61] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:14:29.080148  150723 system_pods.go:74] duration metric: took 176.977613ms to wait for pod list to return data ...
	I1028 11:14:29.080159  150723 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:14:29.266599  150723 request.go:632] Waited for 186.363794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:14:29.266653  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:14:29.266658  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.266665  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.266669  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.271060  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:29.271213  150723 default_sa.go:45] found service account: "default"
	I1028 11:14:29.271235  150723 default_sa.go:55] duration metric: took 191.069027ms for default service account to be created ...
	I1028 11:14:29.271247  150723 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:14:29.466315  150723 request.go:632] Waited for 194.981882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.466408  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.466421  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.466436  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.466448  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.472918  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:29.481266  150723 system_pods.go:86] 24 kube-system pods found
	I1028 11:14:29.481302  150723 system_pods.go:89] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:14:29.481308  150723 system_pods.go:89] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:14:29.481312  150723 system_pods.go:89] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:14:29.481316  150723 system_pods.go:89] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:14:29.481320  150723 system_pods.go:89] "etcd-ha-928358-m03" [56e4453a-65fd-4b3f-9556-e5cec7aa0400] Running
	I1028 11:14:29.481324  150723 system_pods.go:89] "kindnet-9k2mz" [946ea25c-8bc6-46d5-9804-7d8f75ba2ad4] Running
	I1028 11:14:29.481327  150723 system_pods.go:89] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:14:29.481330  150723 system_pods.go:89] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:14:29.481333  150723 system_pods.go:89] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:14:29.481336  150723 system_pods.go:89] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:14:29.481339  150723 system_pods.go:89] "kube-apiserver-ha-928358-m03" [b5e63feb-e15c-42f4-8e49-9775a7602add] Running
	I1028 11:14:29.481343  150723 system_pods.go:89] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:14:29.481346  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:14:29.481350  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m03" [ad543df1-fd1e-4fbe-b70b-06af7d39f971] Running
	I1028 11:14:29.481354  150723 system_pods.go:89] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:14:29.481359  150723 system_pods.go:89] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:14:29.481362  150723 system_pods.go:89] "kube-proxy-np8x5" [c8dd1d78-2375-49d4-b476-ec52dd65830b] Running
	I1028 11:14:29.481364  150723 system_pods.go:89] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:14:29.481368  150723 system_pods.go:89] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:14:29.481372  150723 system_pods.go:89] "kube-scheduler-ha-928358-m03" [b9809d8d-8a45-4363-9b03-55995deb6b62] Running
	I1028 11:14:29.481378  150723 system_pods.go:89] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:14:29.481382  150723 system_pods.go:89] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:14:29.481388  150723 system_pods.go:89] "kube-vip-ha-928358-m03" [894e8b21-2ffc-4ad5-89b1-80c915aecfb9] Running
	I1028 11:14:29.481392  150723 system_pods.go:89] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:14:29.481402  150723 system_pods.go:126] duration metric: took 210.146699ms to wait for k8s-apps to be running ...
	I1028 11:14:29.481415  150723 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:14:29.481478  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:14:29.499294  150723 system_svc.go:56] duration metric: took 17.867458ms WaitForService to wait for kubelet
	I1028 11:14:29.499345  150723 kubeadm.go:582] duration metric: took 24.099188581s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:14:29.499369  150723 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:14:29.666183  150723 request.go:632] Waited for 166.698659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1028 11:14:29.666244  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1028 11:14:29.666250  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.666258  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.666262  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.670701  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:29.671840  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671859  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671869  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671873  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671877  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671880  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671883  150723 node_conditions.go:105] duration metric: took 172.509467ms to run NodePressure ...
	I1028 11:14:29.671895  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:14:29.671914  150723 start.go:255] writing updated cluster config ...
	I1028 11:14:29.672186  150723 ssh_runner.go:195] Run: rm -f paused
	I1028 11:14:29.727881  150723 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:14:29.729936  150723 out.go:177] * Done! kubectl is now configured to use "ha-928358" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.034192741Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-dnw8z,Uid:9c810197-a557-46ef-b357-7e291a4a7b89,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730114071346550352,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:14:30.733782207Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:84b302cf-9f88-4a96-aa61-c2ca6512e060,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1730113923125936665,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-28T11:12:02.807323181Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xxxgw,Uid:6a07f06b-45fb-48df-a2a2-11a778f673f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113923125059570,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a07f06b-45fb-48df-a2a2-11a778f673f9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:12:02.805561191Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gnm9r,Uid:a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1730113923103629359,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:12:02.797315413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&PodSandboxMetadata{Name:kindnet-pq9gp,Uid:2ea8de0e-a664-4adb-aec2-6f98508540c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113910415147879,Labels:map[string]string{app: kindnet,controller-revision-hash: 6f5b6b96c8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:11:50.106439100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&PodSandboxMetadata{Name:kube-proxy-8fxdn,Uid:7b2e1e84-6129-4868-b46b-525da3cdf687,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113910405770392,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:11:50.090649853Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&PodSandboxMetadata{Name:etcd-ha-928358,Uid:6c6aafad1b68cb8667c9a27dc935b2f4,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1730113898901764232,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.206:2379,kubernetes.io/config.hash: 6c6aafad1b68cb8667c9a27dc935b2f4,kubernetes.io/config.seen: 2024-10-28T11:11:38.384829455Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-928358,Uid:5ad239d10939bdcd9fa6b3f4d3a18685,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113898894235076,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d109
39bdcd9fa6b3f4d3a18685,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.206:8443,kubernetes.io/config.hash: 5ad239d10939bdcd9fa6b3f4d3a18685,kubernetes.io/config.seen: 2024-10-28T11:11:38.384833611Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-928358,Uid:65f0454183202822eaaf9dce289e7ab0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113898890390090,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{kubernetes.io/config.hash: 65f0454183202822eaaf9dce289e7ab0,kubernetes.io/config.seen: 2024-10-28T11:11:38.384910523Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2efa4330e0881e7fbc78
ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-928358,Uid:bf3ddb9faad874d83f5a9c68c563fb6b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113898884624977,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bf3ddb9faad874d83f5a9c68c563fb6b,kubernetes.io/config.seen: 2024-10-28T11:11:38.384907467Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-928358,Uid:66d5e9725d6fffac64bd660c7f6042f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113898864719968,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 66d5e9725d6fffac64bd660c7f6042f6,kubernetes.io/config.seen: 2024-10-28T11:11:38.384909743Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e0a77a67-0731-4084-921c-537b5dd3d1cb name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.035314251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1a8ab93-4699-4c5f-af0b-fa4a0f2361c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.035372995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1a8ab93-4699-4c5f-af0b-fa4a0f2361c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.035589599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1a8ab93-4699-4c5f-af0b-fa4a0f2361c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.040134941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ebc6e4c-cfbb-4db2-bad6-c8c94a8257ce name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.040202384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ebc6e4c-cfbb-4db2-bad6-c8c94a8257ce name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.041893225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=345ed937-ab1c-4426-b0d7-64d4a36c9ae7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.042507857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114293042481237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=345ed937-ab1c-4426-b0d7-64d4a36c9ae7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.043189174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=742a3c50-8c65-4498-9953-49e541295833 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.043287790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=742a3c50-8c65-4498-9953-49e541295833 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.043550286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=742a3c50-8c65-4498-9953-49e541295833 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.083298119Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c5896dd-7161-495e-830c-4d560d8fe090 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.083373814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c5896dd-7161-495e-830c-4d560d8fe090 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.085203076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c13e326-5497-4bd5-8a7e-5522f5616801 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.085856334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114293085830034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c13e326-5497-4bd5-8a7e-5522f5616801 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.086467621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ca8aec4-49a1-410b-8895-485d5ffd67e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.086523034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ca8aec4-49a1-410b-8895-485d5ffd67e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.086776394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ca8aec4-49a1-410b-8895-485d5ffd67e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.139298382Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31488707-14c4-4134-9b5c-77e807da6b50 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.139751714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31488707-14c4-4134-9b5c-77e807da6b50 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.140919707Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df96ba3e-c688-424a-b89f-6e960115373b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.141443318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114293141421018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df96ba3e-c688-424a-b89f-6e960115373b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.142189631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a882efa0-92a8-4b5a-b813-d16674c949f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.142240926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a882efa0-92a8-4b5a-b813-d16674c949f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:13 ha-928358 crio[664]: time="2024-10-28 11:18:13.142749514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a882efa0-92a8-4b5a-b813-d16674c949f1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	678eb45e28d22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   6fcf4a6026d95       busybox-7dff88458-dnw8z
	267b822906895       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   554c79cdc22b7       coredns-7c65d6cfc9-gnm9r
	0ec81022134ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b55f959c9e26e       coredns-7c65d6cfc9-xxxgw
	101876df5ba49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   cc9b8c6075292       storage-provisioner
	93fda9ea564e1       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   af0a9858b9f50       kindnet-pq9gp
	6af78d85866c9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   f07333184a007       kube-proxy-8fxdn
	b4500f47684e6       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   aef8ad820f733       kube-vip-ha-928358
	a75ab3d16aba2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   841e8a03bb9b3       etcd-ha-928358
	f8221151573cf       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   1975c249cdfee       kube-apiserver-ha-928358
	e735b7e201a7d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   2efa4330e0881       kube-controller-manager-ha-928358
	1be8f3556358e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   041b17e002580       kube-scheduler-ha-928358
	
	
	==> coredns [0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962] <==
	[INFO] 10.244.2.2:54221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001644473s
	[INFO] 10.244.2.2:58493 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00055293s
	[INFO] 10.244.1.2:59466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000373197s
	[INFO] 10.244.1.2:59196 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002135371s
	[INFO] 10.244.0.4:48789 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140504s
	[INFO] 10.244.0.4:43613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168237s
	[INFO] 10.244.0.4:38143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.016935286s
	[INFO] 10.244.0.4:39110 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177298s
	[INFO] 10.244.2.2:46780 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169863s
	[INFO] 10.244.2.2:56782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002009621s
	[INFO] 10.244.2.2:39525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138628s
	[INFO] 10.244.2.2:53832 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216458s
	[INFO] 10.244.1.2:39727 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000226061s
	[INFO] 10.244.1.2:60944 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001495416s
	[INFO] 10.244.1.2:36506 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119701s
	[INFO] 10.244.1.2:59657 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001674s
	[INFO] 10.244.0.4:50368 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178977s
	[INFO] 10.244.0.4:47562 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089999s
	[INFO] 10.244.1.2:44983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013645s
	[INFO] 10.244.1.2:33581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164661s
	[INFO] 10.244.1.2:39245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099456s
	[INFO] 10.244.0.4:48286 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018935s
	[INFO] 10.244.0.4:33651 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000163132s
	[INFO] 10.244.2.2:57361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144876s
	[INFO] 10.244.2.2:38124 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021886s
	
	
	==> coredns [267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134] <==
	[INFO] 10.244.0.4:46197 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168175s
	[INFO] 10.244.0.4:43404 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138086s
	[INFO] 10.244.2.2:42078 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211245s
	[INFO] 10.244.2.2:43818 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001478975s
	[INFO] 10.244.2.2:36869 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148567s
	[INFO] 10.244.2.2:38696 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110904s
	[INFO] 10.244.1.2:53013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000625096s
	[INFO] 10.244.1.2:57247 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002184098s
	[INFO] 10.244.1.2:60298 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097712s
	[INFO] 10.244.1.2:42104 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099517s
	[INFO] 10.244.0.4:43344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166235s
	[INFO] 10.244.0.4:39756 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110369s
	[INFO] 10.244.2.2:51568 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132969s
	[INFO] 10.244.2.2:39038 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106245s
	[INFO] 10.244.2.2:36223 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090887s
	[INFO] 10.244.2.2:53817 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077711s
	[INFO] 10.244.1.2:45611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112879s
	[INFO] 10.244.0.4:48292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126001s
	[INFO] 10.244.0.4:49134 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000314244s
	[INFO] 10.244.2.2:38137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166744s
	[INFO] 10.244.2.2:49391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000218881s
	[INFO] 10.244.1.2:58619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152475s
	[INFO] 10.244.1.2:59879 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000283359s
	[INFO] 10.244.1.2:33696 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103786s
	[INFO] 10.244.1.2:41150 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120227s
	
	
	==> describe nodes <==
	Name:               ha-928358
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_11_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:11:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:12:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-928358
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3063a9eb16b941929fe95ea9deb85942
	  System UUID:                3063a9eb-16b9-4192-9fe9-5ea9deb85942
	  Boot ID:                    4750ce27-a752-459c-82e1-f46d3ba9e4fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dnw8z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-7c65d6cfc9-gnm9r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-xxxgw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-928358                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m28s
	  kube-system                 kindnet-pq9gp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-928358             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-ha-928358    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-8fxdn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-928358             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-vip-ha-928358                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m22s  kube-proxy       
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m28s  kubelet          Node ha-928358 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s  kubelet          Node ha-928358 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s  kubelet          Node ha-928358 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	  Normal  NodeReady                6m11s  kubelet          Node ha-928358 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	  Normal  RegisteredNode           4m3s   node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	
	
	Name:               ha-928358-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_12_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:12:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:15:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-928358-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb0972414207466c8358559557f25b09
	  System UUID:                fb097241-4207-466c-8358-559557f25b09
	  Boot ID:                    69b9f603-4134-42b4-a3f9-eeae845c3c91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tx5tk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-928358-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-j4vj5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-apiserver-ha-928358-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-928358-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-cfhp5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-ha-928358-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-vip-ha-928358-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node ha-928358-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node ha-928358-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node ha-928358-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  NodeNotReady             99s                    node-controller  Node ha-928358-m02 status is now: NodeNotReady
	
	
	Name:               ha-928358-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_14_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:14:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-928358-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebf69c3934784b66bc2bf05f458d71ba
	  System UUID:                ebf69c39-3478-4b66-bc2b-f05f458d71ba
	  Boot ID:                    2e5043ad-620d-4233-b866-677c45434de6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h8ctp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-928358-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-9k2mz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-928358-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-controller-manager-ha-928358-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-proxy-np8x5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-928358-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-vip-ha-928358-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-928358-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-928358-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-928358-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	
	
	Name:               ha-928358-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_15_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:15:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-928358-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ee6c88b1c8c4fa2aebbfe4047465ead
	  System UUID:                6ee6c88b-1c8c-4fa2-aebb-fe4047465ead
	  Boot ID:                    b70ab214-29c9-4d90-9700-0ff1df9971f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-k2ddr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m4s
	  kube-system                 kube-proxy-fl4b7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node ha-928358-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node ha-928358-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node ha-928358-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-928358-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 11:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053627] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041855] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.945749] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.924544] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.657378] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.658005] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.063082] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059947] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.199848] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.133132] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.303491] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.303698] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.055659] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.938074] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +1.148998] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.072047] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087002] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.352589] kauditd_printk_skb: 21 callbacks suppressed
	[Oct28 11:12] kauditd_printk_skb: 38 callbacks suppressed
	[ +49.929447] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854] <==
	{"level":"warn","ts":"2024-10-28T11:18:13.457836Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.463708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.479387Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.488904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.497776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.500314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.502975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.507586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.514382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.522558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.528715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.533960Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.537709Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.544167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.550493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.557177Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.557386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.560902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.566244Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.567139Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.577147Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.584523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.592449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.600439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:13.634614Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:18:13 up 7 min,  0 users,  load average: 0.56, 0.52, 0.28
	Linux ha-928358 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a] <==
	I1028 11:17:42.317978       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:17:52.309469       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:17:52.309528       1 main.go:300] handling current node
	I1028 11:17:52.309550       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:17:52.309558       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:17:52.309929       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:17:52.309971       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:17:52.310797       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:17:52.310848       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:02.315389       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:02.315498       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:18:02.315666       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:02.315707       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:18:02.315812       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:02.315836       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:02.315914       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:02.315935       1 main.go:300] handling current node
	I1028 11:18:12.318153       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:12.318184       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:12.318402       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:12.318430       1 main.go:300] handling current node
	I1028 11:18:12.318441       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:12.318446       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:18:12.318605       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:12.318645       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52] <==
	I1028 11:11:44.249575       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1028 11:11:44.264324       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1028 11:11:44.266721       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 11:11:44.273696       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 11:11:44.441833       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 11:11:45.375393       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:11:45.401215       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:11:45.422922       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:11:50.040543       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:11:50.160325       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:14:35.737044       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49680: use of closed network connection
	E1028 11:14:35.939412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49710: use of closed network connection
	E1028 11:14:36.137760       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49736: use of closed network connection
	E1028 11:14:36.353242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49742: use of closed network connection
	E1028 11:14:36.573304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49764: use of closed network connection
	E1028 11:14:36.795811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49780: use of closed network connection
	E1028 11:14:36.981176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49798: use of closed network connection
	E1028 11:14:37.177919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49830: use of closed network connection
	E1028 11:14:37.363976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49844: use of closed network connection
	E1028 11:14:37.667823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49884: use of closed network connection
	E1028 11:14:37.860879       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49906: use of closed network connection
	E1028 11:14:38.044254       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49922: use of closed network connection
	E1028 11:14:38.230562       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49930: use of closed network connection
	E1028 11:14:38.433175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49954: use of closed network connection
	E1028 11:14:38.620514       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49974: use of closed network connection
	
	
	==> kube-controller-manager [e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef] <==
	I1028 11:15:02.129745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m03"
	E1028 11:15:09.422518       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8k978 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8k978\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1028 11:15:09.795491       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-928358-m04\" does not exist"
	I1028 11:15:09.833650       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-928358-m04" podCIDRs=["10.244.3.0/24"]
	I1028 11:15:09.833720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:09.833754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.048409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.186481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.510390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:14.501689       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-928358-m04"
	I1028 11:15:14.502311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:14.708709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:20.001285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:31.204169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-928358-m04"
	I1028 11:15:31.204768       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:31.224821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:34.519983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:40.626763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:16:34.553439       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-928358-m04"
	I1028 11:16:34.556249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:34.585375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:34.698936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.004399ms"
	I1028 11:16:34.699212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.024µs"
	I1028 11:16:35.153194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:39.778629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	
	
	==> kube-proxy [6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:11:50.898284       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:11:50.922359       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E1028 11:11:50.922435       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:11:51.064127       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:11:51.064169       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:11:51.064206       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:11:51.084457       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:11:51.088588       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:11:51.088608       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:11:51.098854       1 config.go:199] "Starting service config controller"
	I1028 11:11:51.099108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:11:51.099342       1 config.go:328] "Starting node config controller"
	I1028 11:11:51.099355       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:11:51.122226       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:11:51.122243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:11:51.199431       1 shared_informer.go:320] Caches are synced for node config
	I1028 11:11:51.199505       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:11:51.222697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583] <==
	W1028 11:11:43.540244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.540296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.541960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:11:43.542068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.589795       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:11:43.589913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.666909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.667067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.681223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:11:43.681426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.721299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:11:43.721931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.811114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.811345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 11:11:46.351113       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:15:09.905243       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k2ddr\": pod kindnet-k2ddr is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k2ddr" node="ha-928358-m04"
	E1028 11:15:09.908212       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1733f64f-2a73-414c-a048-b4ad6b9bd117(kube-system/kindnet-k2ddr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k2ddr"
	E1028 11:15:09.910352       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k2ddr\": pod kindnet-k2ddr is already assigned to node \"ha-928358-m04\"" pod="kube-system/kindnet-k2ddr"
	I1028 11:15:09.910453       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k2ddr" node="ha-928358-m04"
	E1028 11:15:09.907070       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fl4b7\": pod kube-proxy-fl4b7 is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fl4b7" node="ha-928358-m04"
	E1028 11:15:09.910582       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 48c26642-8d42-43a1-ad06-ba9408499bf8(kube-system/kube-proxy-fl4b7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fl4b7"
	E1028 11:15:09.910623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fl4b7\": pod kube-proxy-fl4b7 is already assigned to node \"ha-928358-m04\"" pod="kube-system/kube-proxy-fl4b7"
	I1028 11:15:09.910661       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fl4b7" node="ha-928358-m04"
	E1028 11:15:09.930971       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tswkg\": pod kube-proxy-tswkg is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tswkg" node="ha-928358-m04"
	E1028 11:15:09.931171       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tswkg\": pod kube-proxy-tswkg is already assigned to node \"ha-928358-m04\"" pod="kube-system/kube-proxy-tswkg"
	
	
	==> kubelet <==
	Oct 28 11:16:45 ha-928358 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:16:45 ha-928358 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:16:45 ha-928358 kubelet[1312]: E1028 11:16:45.513274    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114205512809475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:16:45 ha-928358 kubelet[1312]: E1028 11:16:45.513333    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114205512809475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:16:55 ha-928358 kubelet[1312]: E1028 11:16:55.514793    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114215514414818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:16:55 ha-928358 kubelet[1312]: E1028 11:16:55.515166    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114215514414818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:05 ha-928358 kubelet[1312]: E1028 11:17:05.516628    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114225516360078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:05 ha-928358 kubelet[1312]: E1028 11:17:05.517193    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114225516360078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:15 ha-928358 kubelet[1312]: E1028 11:17:15.518657    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114235518443764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:15 ha-928358 kubelet[1312]: E1028 11:17:15.518678    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114235518443764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:25 ha-928358 kubelet[1312]: E1028 11:17:25.532318    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114245531090228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:25 ha-928358 kubelet[1312]: E1028 11:17:25.532805    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114245531090228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:35 ha-928358 kubelet[1312]: E1028 11:17:35.534490    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114255534180329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:35 ha-928358 kubelet[1312]: E1028 11:17:35.534569    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114255534180329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.349514    1312 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:17:45 ha-928358 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.536867    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114265536656122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.536910    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114265536656122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:55 ha-928358 kubelet[1312]: E1028 11:17:55.539160    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114275538681035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:55 ha-928358 kubelet[1312]: E1028 11:17:55.539208    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114275538681035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:05 ha-928358 kubelet[1312]: E1028 11:18:05.540899    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114285540540832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:05 ha-928358 kubelet[1312]: E1028 11:18:05.540940    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114285540540832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-928358 -n ha-928358
helpers_test.go:261: (dbg) Run:  kubectl --context ha-928358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.83s)
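
Note on the etcd output above: every "dropped internal Raft message since sending buffer is full" warning targets the same remote peer (ed53c0c0114aeaee) with remote-peer-active: false, which is consistent with the stopped secondary control-plane member being unreachable rather than a general network problem. As a reading aid only, the minimal Go sketch below tallies those drops per peer from JSON log lines fed on stdin; it is illustrative, assumes the etcd field names shown in the dump ("msg", "remote-peer-id", "remote-peer-active"), and is not part of the test suite.

// Hedged sketch (not part of the test suite): count etcd "dropped internal Raft
// message" warnings per remote peer, to confirm all drops point at one member.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type etcdLine struct {
	Msg        string `json:"msg"`
	RemotePeer string `json:"remote-peer-id"`
	Active     bool   `json:"remote-peer-active"`
}

func main() {
	drops := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // tolerate long log lines
	for sc.Scan() {
		var l etcdLine
		if err := json.Unmarshal([]byte(sc.Text()), &l); err != nil {
			continue // skip non-JSON lines in the dump
		}
		if strings.Contains(l.Msg, "dropped internal Raft message") {
			drops[l.RemotePeer]++
		}
	}
	for peer, n := range drops {
		fmt.Printf("%s: %d dropped heartbeats\n", peer, n)
	}
}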

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (6.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.427352626s)
ha_test.go:415: expected profile "ha-928358" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-928358\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-928358\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-928358\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.206\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.15\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.44\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.203\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevir
t\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\"
,\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-928358 -n ha-928358
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 logs -n 25: (1.704943553s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m03_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m04 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp testdata/cp-test.txt                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m04_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03:/home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m03 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-928358 node stop m02 -v=7                                                    | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:10:59
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:10:59.463321  150723 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:10:59.463437  150723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:59.463447  150723 out.go:358] Setting ErrFile to fd 2...
	I1028 11:10:59.463453  150723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:59.463619  150723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:10:59.464198  150723 out.go:352] Setting JSON to false
	I1028 11:10:59.465062  150723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3202,"bootTime":1730110657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:10:59.465170  150723 start.go:139] virtualization: kvm guest
	I1028 11:10:59.467541  150723 out.go:177] * [ha-928358] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:10:59.469144  150723 notify.go:220] Checking for updates...
	I1028 11:10:59.469164  150723 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:10:59.470932  150723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:10:59.472579  150723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:10:59.474106  150723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:59.476022  150723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:10:59.477386  150723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:10:59.478873  150723 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:10:59.515106  150723 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:10:59.516643  150723 start.go:297] selected driver: kvm2
	I1028 11:10:59.516662  150723 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:10:59.516677  150723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:10:59.517412  150723 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:10:59.517509  150723 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:10:59.533665  150723 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:10:59.533714  150723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:10:59.533960  150723 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:10:59.533991  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:10:59.534033  150723 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:10:59.534056  150723 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:10:59.534109  150723 start.go:340] cluster config:
	{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:10:59.534204  150723 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:10:59.536334  150723 out.go:177] * Starting "ha-928358" primary control-plane node in "ha-928358" cluster
	I1028 11:10:59.537748  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:10:59.537794  150723 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:10:59.537802  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:10:59.537881  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:10:59.537891  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:10:59.538184  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:10:59.538208  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json: {Name:mkb8dad6cb32a1c4cc26cae85e4e9234d9821c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:10:59.538374  150723 start.go:360] acquireMachinesLock for ha-928358: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:10:59.538406  150723 start.go:364] duration metric: took 16.963µs to acquireMachinesLock for "ha-928358"
	I1028 11:10:59.538425  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:10:59.538479  150723 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:10:59.540050  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:10:59.540188  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:10:59.540238  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:10:59.555032  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I1028 11:10:59.555455  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:10:59.555961  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:10:59.556000  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:10:59.556420  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:10:59.556590  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:10:59.556764  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:10:59.556945  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:10:59.556977  150723 client.go:168] LocalClient.Create starting
	I1028 11:10:59.557015  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:10:59.557068  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:10:59.557092  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:10:59.557167  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:10:59.557195  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:10:59.557226  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:10:59.557253  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:10:59.557273  150723 main.go:141] libmachine: (ha-928358) Calling .PreCreateCheck
	I1028 11:10:59.557662  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:10:59.558063  150723 main.go:141] libmachine: Creating machine...
	I1028 11:10:59.558080  150723 main.go:141] libmachine: (ha-928358) Calling .Create
	I1028 11:10:59.558226  150723 main.go:141] libmachine: (ha-928358) Creating KVM machine...
	I1028 11:10:59.559811  150723 main.go:141] libmachine: (ha-928358) DBG | found existing default KVM network
	I1028 11:10:59.560481  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.560340  150746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I1028 11:10:59.560504  150723 main.go:141] libmachine: (ha-928358) DBG | created network xml: 
	I1028 11:10:59.560515  150723 main.go:141] libmachine: (ha-928358) DBG | <network>
	I1028 11:10:59.560521  150723 main.go:141] libmachine: (ha-928358) DBG |   <name>mk-ha-928358</name>
	I1028 11:10:59.560530  150723 main.go:141] libmachine: (ha-928358) DBG |   <dns enable='no'/>
	I1028 11:10:59.560536  150723 main.go:141] libmachine: (ha-928358) DBG |   
	I1028 11:10:59.560547  150723 main.go:141] libmachine: (ha-928358) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:10:59.560555  150723 main.go:141] libmachine: (ha-928358) DBG |     <dhcp>
	I1028 11:10:59.560564  150723 main.go:141] libmachine: (ha-928358) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:10:59.560572  150723 main.go:141] libmachine: (ha-928358) DBG |     </dhcp>
	I1028 11:10:59.560581  150723 main.go:141] libmachine: (ha-928358) DBG |   </ip>
	I1028 11:10:59.560587  150723 main.go:141] libmachine: (ha-928358) DBG |   
	I1028 11:10:59.560595  150723 main.go:141] libmachine: (ha-928358) DBG | </network>
	I1028 11:10:59.560601  150723 main.go:141] libmachine: (ha-928358) DBG | 
	I1028 11:10:59.566260  150723 main.go:141] libmachine: (ha-928358) DBG | trying to create private KVM network mk-ha-928358 192.168.39.0/24...
	I1028 11:10:59.635650  150723 main.go:141] libmachine: (ha-928358) DBG | private KVM network mk-ha-928358 192.168.39.0/24 created
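	[editor note] The "created network xml" lines above show the libvirt network definition the driver generated for mk-ha-928358. As a rough, hypothetical sketch (not minikube's actual code; names like NetParams and netTemplate are illustrative), rendering an equivalent definition with Go's text/template could look like this:

	package main

	import (
		"os"
		"text/template"
	)

	// NetParams holds the fields substituted into the network XML.
	type NetParams struct {
		Name     string
		Gateway  string
		Netmask  string
		DHCPLow  string
		DHCPHigh string
	}

	const netTemplate = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.DHCPLow}}' end='{{.DHCPHigh}}'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		p := NetParams{
			Name:     "mk-ha-928358",
			Gateway:  "192.168.39.1",
			Netmask:  "255.255.255.0",
			DHCPLow:  "192.168.39.2",
			DHCPHigh: "192.168.39.253",
		}
		t := template.Must(template.New("net").Parse(netTemplate))
		// Write the rendered XML to stdout; a real caller would hand it to libvirt.
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
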
	I1028 11:10:59.635720  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.635608  150746 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:59.635745  150723 main.go:141] libmachine: (ha-928358) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 ...
	I1028 11:10:59.635835  150723 main.go:141] libmachine: (ha-928358) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:10:59.635904  150723 main.go:141] libmachine: (ha-928358) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:10:59.913193  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.913037  150746 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa...
	I1028 11:10:59.999912  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.999757  150746 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/ha-928358.rawdisk...
	I1028 11:10:59.999940  150723 main.go:141] libmachine: (ha-928358) DBG | Writing magic tar header
	I1028 11:10:59.999950  150723 main.go:141] libmachine: (ha-928358) DBG | Writing SSH key tar header
	I1028 11:10:59.999957  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.999874  150746 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 ...
	I1028 11:10:59.999966  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358
	I1028 11:11:00.000011  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 (perms=drwx------)
	I1028 11:11:00.000025  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:11:00.000035  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:11:00.000055  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:11:00.000076  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:11:00.000090  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:00.000108  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:11:00.000117  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:11:00.000127  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:11:00.000138  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home
	I1028 11:11:00.000147  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:11:00.000160  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:11:00.000177  150723 main.go:141] libmachine: (ha-928358) DBG | Skipping /home - not owner
	I1028 11:11:00.000190  150723 main.go:141] libmachine: (ha-928358) Creating domain...
	I1028 11:11:00.001605  150723 main.go:141] libmachine: (ha-928358) define libvirt domain using xml: 
	I1028 11:11:00.001643  150723 main.go:141] libmachine: (ha-928358) <domain type='kvm'>
	I1028 11:11:00.001657  150723 main.go:141] libmachine: (ha-928358)   <name>ha-928358</name>
	I1028 11:11:00.001672  150723 main.go:141] libmachine: (ha-928358)   <memory unit='MiB'>2200</memory>
	I1028 11:11:00.001685  150723 main.go:141] libmachine: (ha-928358)   <vcpu>2</vcpu>
	I1028 11:11:00.001693  150723 main.go:141] libmachine: (ha-928358)   <features>
	I1028 11:11:00.001703  150723 main.go:141] libmachine: (ha-928358)     <acpi/>
	I1028 11:11:00.001711  150723 main.go:141] libmachine: (ha-928358)     <apic/>
	I1028 11:11:00.001724  150723 main.go:141] libmachine: (ha-928358)     <pae/>
	I1028 11:11:00.001748  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.001760  150723 main.go:141] libmachine: (ha-928358)   </features>
	I1028 11:11:00.001770  150723 main.go:141] libmachine: (ha-928358)   <cpu mode='host-passthrough'>
	I1028 11:11:00.001783  150723 main.go:141] libmachine: (ha-928358)   
	I1028 11:11:00.001795  150723 main.go:141] libmachine: (ha-928358)   </cpu>
	I1028 11:11:00.001806  150723 main.go:141] libmachine: (ha-928358)   <os>
	I1028 11:11:00.001820  150723 main.go:141] libmachine: (ha-928358)     <type>hvm</type>
	I1028 11:11:00.001839  150723 main.go:141] libmachine: (ha-928358)     <boot dev='cdrom'/>
	I1028 11:11:00.001851  150723 main.go:141] libmachine: (ha-928358)     <boot dev='hd'/>
	I1028 11:11:00.001863  150723 main.go:141] libmachine: (ha-928358)     <bootmenu enable='no'/>
	I1028 11:11:00.001872  150723 main.go:141] libmachine: (ha-928358)   </os>
	I1028 11:11:00.001884  150723 main.go:141] libmachine: (ha-928358)   <devices>
	I1028 11:11:00.001898  150723 main.go:141] libmachine: (ha-928358)     <disk type='file' device='cdrom'>
	I1028 11:11:00.001919  150723 main.go:141] libmachine: (ha-928358)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/boot2docker.iso'/>
	I1028 11:11:00.001933  150723 main.go:141] libmachine: (ha-928358)       <target dev='hdc' bus='scsi'/>
	I1028 11:11:00.001968  150723 main.go:141] libmachine: (ha-928358)       <readonly/>
	I1028 11:11:00.001991  150723 main.go:141] libmachine: (ha-928358)     </disk>
	I1028 11:11:00.002008  150723 main.go:141] libmachine: (ha-928358)     <disk type='file' device='disk'>
	I1028 11:11:00.002023  150723 main.go:141] libmachine: (ha-928358)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:11:00.002044  150723 main.go:141] libmachine: (ha-928358)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/ha-928358.rawdisk'/>
	I1028 11:11:00.002058  150723 main.go:141] libmachine: (ha-928358)       <target dev='hda' bus='virtio'/>
	I1028 11:11:00.002070  150723 main.go:141] libmachine: (ha-928358)     </disk>
	I1028 11:11:00.002106  150723 main.go:141] libmachine: (ha-928358)     <interface type='network'>
	I1028 11:11:00.002133  150723 main.go:141] libmachine: (ha-928358)       <source network='mk-ha-928358'/>
	I1028 11:11:00.002148  150723 main.go:141] libmachine: (ha-928358)       <model type='virtio'/>
	I1028 11:11:00.002159  150723 main.go:141] libmachine: (ha-928358)     </interface>
	I1028 11:11:00.002172  150723 main.go:141] libmachine: (ha-928358)     <interface type='network'>
	I1028 11:11:00.002179  150723 main.go:141] libmachine: (ha-928358)       <source network='default'/>
	I1028 11:11:00.002190  150723 main.go:141] libmachine: (ha-928358)       <model type='virtio'/>
	I1028 11:11:00.002197  150723 main.go:141] libmachine: (ha-928358)     </interface>
	I1028 11:11:00.002206  150723 main.go:141] libmachine: (ha-928358)     <serial type='pty'>
	I1028 11:11:00.002210  150723 main.go:141] libmachine: (ha-928358)       <target port='0'/>
	I1028 11:11:00.002216  150723 main.go:141] libmachine: (ha-928358)     </serial>
	I1028 11:11:00.002226  150723 main.go:141] libmachine: (ha-928358)     <console type='pty'>
	I1028 11:11:00.002250  150723 main.go:141] libmachine: (ha-928358)       <target type='serial' port='0'/>
	I1028 11:11:00.002282  150723 main.go:141] libmachine: (ha-928358)     </console>
	I1028 11:11:00.002291  150723 main.go:141] libmachine: (ha-928358)     <rng model='virtio'>
	I1028 11:11:00.002297  150723 main.go:141] libmachine: (ha-928358)       <backend model='random'>/dev/random</backend>
	I1028 11:11:00.002303  150723 main.go:141] libmachine: (ha-928358)     </rng>
	I1028 11:11:00.002306  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.002311  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.002318  150723 main.go:141] libmachine: (ha-928358)   </devices>
	I1028 11:11:00.002323  150723 main.go:141] libmachine: (ha-928358) </domain>
	I1028 11:11:00.002328  150723 main.go:141] libmachine: (ha-928358) 
	I1028 11:11:00.006810  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:30:04:d3 in network default
	I1028 11:11:00.007391  150723 main.go:141] libmachine: (ha-928358) Ensuring networks are active...
	I1028 11:11:00.007412  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:00.008229  150723 main.go:141] libmachine: (ha-928358) Ensuring network default is active
	I1028 11:11:00.008655  150723 main.go:141] libmachine: (ha-928358) Ensuring network mk-ha-928358 is active
	I1028 11:11:00.009320  150723 main.go:141] libmachine: (ha-928358) Getting domain xml...
	I1028 11:11:00.010062  150723 main.go:141] libmachine: (ha-928358) Creating domain...
	I1028 11:11:01.218137  150723 main.go:141] libmachine: (ha-928358) Waiting to get IP...
	I1028 11:11:01.218922  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.219337  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.219385  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.219330  150746 retry.go:31] will retry after 310.252899ms: waiting for machine to come up
	I1028 11:11:01.530950  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.531414  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.531437  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.531371  150746 retry.go:31] will retry after 282.464528ms: waiting for machine to come up
	I1028 11:11:01.815720  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.816159  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.816184  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.816121  150746 retry.go:31] will retry after 304.583775ms: waiting for machine to come up
	I1028 11:11:02.122718  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:02.123224  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:02.123251  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:02.123154  150746 retry.go:31] will retry after 442.531578ms: waiting for machine to come up
	I1028 11:11:02.566777  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:02.567197  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:02.567222  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:02.567162  150746 retry.go:31] will retry after 677.799642ms: waiting for machine to come up
	I1028 11:11:03.246160  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:03.246663  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:03.246691  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:03.246611  150746 retry.go:31] will retry after 661.382392ms: waiting for machine to come up
	I1028 11:11:03.909443  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:03.909955  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:03.910006  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:03.909898  150746 retry.go:31] will retry after 1.086932803s: waiting for machine to come up
	I1028 11:11:04.997802  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:04.998295  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:04.998322  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:04.998231  150746 retry.go:31] will retry after 1.028978753s: waiting for machine to come up
	I1028 11:11:06.028312  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:06.028699  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:06.028724  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:06.028658  150746 retry.go:31] will retry after 1.229241603s: waiting for machine to come up
	I1028 11:11:07.259043  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:07.259415  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:07.259442  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:07.259356  150746 retry.go:31] will retry after 1.621101278s: waiting for machine to come up
	I1028 11:11:08.882760  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:08.883130  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:08.883166  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:08.883106  150746 retry.go:31] will retry after 2.010099388s: waiting for machine to come up
	I1028 11:11:10.894594  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:10.895005  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:10.895028  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:10.894965  150746 retry.go:31] will retry after 2.268994964s: waiting for machine to come up
	I1028 11:11:13.166469  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:13.166906  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:13.166930  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:13.166853  150746 retry.go:31] will retry after 2.964491157s: waiting for machine to come up
	I1028 11:11:16.134568  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:16.135014  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:16.135030  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:16.134978  150746 retry.go:31] will retry after 3.669669561s: waiting for machine to come up
	I1028 11:11:19.805844  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:19.806451  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:19.806483  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:19.806402  150746 retry.go:31] will retry after 6.986761695s: waiting for machine to come up
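	[editor note] The repeated "will retry after ..." lines above come from a retry loop that waits with growing, jittered delays for the VM to obtain a DHCP lease. A minimal sketch of that pattern, assuming illustrative helpers waitForIP and lookupIP rather than minikube's retry.go API:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the DHCP leases; here it fails until attempt 5.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.206", nil
	}

	// waitForIP retries lookupIP with a randomized, growing delay between
	// attempts and gives up once maxWait has elapsed.
	func waitForIP(maxWait time.Duration) (string, error) {
		deadline := time.Now().Add(maxWait)
		base := 300 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("timed out waiting for IP: %w", err)
			}
			// Grow the delay roughly geometrically and add jitter, mirroring the
			// increasing "will retry after" durations in the log.
			delay := time.Duration(float64(base) * (1 + rand.Float64()))
			fmt.Printf("will retry after %v\n", delay)
			time.Sleep(delay)
			base = base * 3 / 2
		}
	}

	func main() {
		ip, err := waitForIP(30 * time.Second)
		fmt.Println(ip, err)
	}
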
	I1028 11:11:26.796618  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.797199  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has current primary IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.797228  150723 main.go:141] libmachine: (ha-928358) Found IP for machine: 192.168.39.206
	I1028 11:11:26.797258  150723 main.go:141] libmachine: (ha-928358) Reserving static IP address...
	I1028 11:11:26.797624  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find host DHCP lease matching {name: "ha-928358", mac: "52:54:00:dd:b2:b7", ip: "192.168.39.206"} in network mk-ha-928358
	I1028 11:11:26.873582  150723 main.go:141] libmachine: (ha-928358) Reserved static IP address: 192.168.39.206
	I1028 11:11:26.873609  150723 main.go:141] libmachine: (ha-928358) Waiting for SSH to be available...
	I1028 11:11:26.873619  150723 main.go:141] libmachine: (ha-928358) DBG | Getting to WaitForSSH function...
	I1028 11:11:26.876283  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.876750  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:26.876781  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.876886  150723 main.go:141] libmachine: (ha-928358) DBG | Using SSH client type: external
	I1028 11:11:26.876901  150723 main.go:141] libmachine: (ha-928358) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa (-rw-------)
	I1028 11:11:26.876929  150723 main.go:141] libmachine: (ha-928358) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:11:26.876941  150723 main.go:141] libmachine: (ha-928358) DBG | About to run SSH command:
	I1028 11:11:26.876952  150723 main.go:141] libmachine: (ha-928358) DBG | exit 0
	I1028 11:11:27.009708  150723 main.go:141] libmachine: (ha-928358) DBG | SSH cmd err, output: <nil>: 
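	[editor note] The "Using SSH client type: external" lines above show the driver probing SSH readiness by running `exit 0` through the system ssh binary with a fixed set of options. A hedged sketch of that probe using os/exec (runExternalSSH is an illustrative name, and the key path in main is a placeholder):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runExternalSSH runs `exit 0` on the guest over ssh and returns any error.
	func runExternalSSH(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0",
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(runExternalSSH("192.168.39.206", "/path/to/id_rsa"))
	}
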
	I1028 11:11:27.010071  150723 main.go:141] libmachine: (ha-928358) KVM machine creation complete!
	I1028 11:11:27.010352  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:11:27.010925  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:27.011146  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:27.011301  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:11:27.011311  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:27.012679  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:11:27.012693  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:11:27.012699  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:11:27.012704  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.014867  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.015214  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.015263  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.015327  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.015507  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.015644  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.015739  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.015911  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.016106  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.016117  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:11:27.128876  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:11:27.128903  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:11:27.128915  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.131646  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.132081  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.132109  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.132331  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.132525  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.132697  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.132852  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.133070  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.133229  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.133242  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:11:27.250569  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:11:27.250647  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:11:27.250657  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:11:27.250664  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.250929  150723 buildroot.go:166] provisioning hostname "ha-928358"
	I1028 11:11:27.250971  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.251130  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.253765  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.254120  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.254146  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.254297  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.254451  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.254601  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.254758  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.254909  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.255102  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.255118  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358 && echo "ha-928358" | sudo tee /etc/hostname
	I1028 11:11:27.384932  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358
	
	I1028 11:11:27.384962  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.387904  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.388215  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.388243  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.388516  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.388719  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.388884  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.389002  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.389152  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.389334  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.389355  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358' | sudo tee -a /etc/hosts; 
				fi
			fi
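	[editor note] The two SSH commands above set the guest hostname and then ensure /etc/hosts carries a matching 127.0.1.1 entry. As a hypothetical sketch (buildHostnameCmd is an illustrative helper, not minikube's API), assembling an equivalent shell snippet in Go could look like:

	package main

	import "fmt"

	// buildHostnameCmd returns a shell snippet that sets the hostname and
	// patches /etc/hosts, equivalent in effect to the commands in the log.
	func buildHostnameCmd(hostname string) string {
		return fmt.Sprintf(
			`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(buildHostnameCmd("ha-928358"))
	}
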
	I1028 11:11:27.516473  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:11:27.516502  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:11:27.516519  150723 buildroot.go:174] setting up certificates
	I1028 11:11:27.516529  150723 provision.go:84] configureAuth start
	I1028 11:11:27.516537  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.516866  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:27.519682  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.520053  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.520077  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.520298  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.522648  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.522984  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.523022  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.523127  150723 provision.go:143] copyHostCerts
	I1028 11:11:27.523161  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:11:27.523220  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:11:27.523235  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:11:27.523317  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:11:27.523418  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:11:27.523442  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:11:27.523451  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:11:27.523494  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:11:27.523565  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:11:27.523591  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:11:27.523600  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:11:27.523634  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:11:27.523699  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358 san=[127.0.0.1 192.168.39.206 ha-928358 localhost minikube]
	I1028 11:11:27.652184  150723 provision.go:177] copyRemoteCerts
	I1028 11:11:27.652239  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:11:27.652263  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.655247  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.655509  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.655537  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.655747  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.655942  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.656141  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.656367  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:27.747959  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:11:27.748026  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:11:27.773785  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:11:27.773875  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1028 11:11:27.798172  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:11:27.798246  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:11:27.823795  150723 provision.go:87] duration metric: took 307.251687ms to configureAuth
	I1028 11:11:27.823824  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:11:27.823999  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:27.824098  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.826733  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.827058  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.827095  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.827231  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.827430  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.827593  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.827720  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.827882  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.828064  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.828082  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:11:28.063521  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:11:28.063544  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:11:28.063563  150723 main.go:141] libmachine: (ha-928358) Calling .GetURL
	I1028 11:11:28.064889  150723 main.go:141] libmachine: (ha-928358) DBG | Using libvirt version 6000000
	I1028 11:11:28.067440  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.067909  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.067936  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.068169  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:11:28.068184  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:11:28.068190  150723 client.go:171] duration metric: took 28.511205055s to LocalClient.Create
	I1028 11:11:28.068213  150723 start.go:167] duration metric: took 28.511273119s to libmachine.API.Create "ha-928358"
	I1028 11:11:28.068224  150723 start.go:293] postStartSetup for "ha-928358" (driver="kvm2")
	I1028 11:11:28.068234  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:11:28.068250  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.068499  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:11:28.068524  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.070718  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.071018  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.071047  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.071207  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.071391  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.071596  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.071768  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.160093  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:11:28.164580  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:11:28.164611  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:11:28.164677  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:11:28.164753  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:11:28.164768  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:11:28.164860  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:11:28.174780  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:11:28.200051  150723 start.go:296] duration metric: took 131.810016ms for postStartSetup
	I1028 11:11:28.200113  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:11:28.200681  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:28.203634  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.204015  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.204039  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.204248  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:28.204459  150723 start.go:128] duration metric: took 28.665968765s to createHost
	I1028 11:11:28.204486  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.206915  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.207241  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.207270  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.207406  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.207565  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.207714  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.207841  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.207995  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:28.208148  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:28.208158  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:11:28.326642  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730113888.306870077
	
	I1028 11:11:28.326664  150723 fix.go:216] guest clock: 1730113888.306870077
	I1028 11:11:28.326674  150723 fix.go:229] Guest: 2024-10-28 11:11:28.306870077 +0000 UTC Remote: 2024-10-28 11:11:28.204471945 +0000 UTC m=+28.781211208 (delta=102.398132ms)
	I1028 11:11:28.326699  150723 fix.go:200] guest clock delta is within tolerance: 102.398132ms
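The fix.go lines above compare the guest's clock (read over SSH with `date +%s.%N`) against the host and only act if the delta exceeds a tolerance. A minimal stand-alone Go sketch of that comparison; the one-second tolerance here is an illustrative assumption, not minikube's actual threshold:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Read the (here: local) machine's clock as seconds.nanoseconds,
	// mirroring the `date +%s.%N` command run over SSH in the log above.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance value is assumed for illustration only.
	const tolerance = time.Second
	fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}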
	I1028 11:11:28.326706  150723 start.go:83] releasing machines lock for "ha-928358", held for 28.788289196s
	I1028 11:11:28.326726  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.327001  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:28.329581  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.329968  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.330003  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.330168  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330728  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330884  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330998  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:11:28.331060  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.331115  150723 ssh_runner.go:195] Run: cat /version.json
	I1028 11:11:28.331141  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.333639  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.333966  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.333994  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334015  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334246  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.334387  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.334412  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334416  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.334585  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.334627  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.334755  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.334771  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.334927  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.335084  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.419255  150723 ssh_runner.go:195] Run: systemctl --version
	I1028 11:11:28.450377  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:11:28.614960  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:11:28.621690  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:11:28.621762  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:11:28.640026  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:11:28.640058  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:11:28.640161  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:11:28.657821  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:11:28.673308  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:11:28.673372  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:11:28.688651  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:11:28.704016  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:11:28.829012  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:11:28.990202  150723 docker.go:233] disabling docker service ...
	I1028 11:11:28.990264  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:11:29.006016  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:11:29.019798  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:11:29.148701  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:11:29.286836  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:11:29.301306  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:11:29.321180  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:11:29.321242  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.332417  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:11:29.332516  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.344116  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.355229  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.366386  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:11:29.377683  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.388680  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.406712  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.418602  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:11:29.428422  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:11:29.428489  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:11:29.442860  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:11:29.453466  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:11:29.587618  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
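The block above amounts to writing two small config files and restarting CRI-O: /etc/crictl.yaml pointing crictl at the CRI-O socket, and the 02-crio.conf drop-in carrying the pause image, cgroup manager, conmon cgroup and default sysctls that the sed commands patched in. A rough Go sketch that renders the same content locally; the [crio.image]/[crio.runtime] section headers follow CRI-O's documented layout and are an assumption about how the drop-in is organised, and the real files live under /etc on the guest and need root:

package main

import (
	"fmt"
	"os"
)

func main() {
	// crictl should talk to CRI-O's socket (see the tee into /etc/crictl.yaml above).
	crictl := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

	// The sed edits above leave the 02-crio.conf drop-in with (at least) these settings.
	crioConf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`
	// Written to the working directory here instead of /etc on the guest.
	if err := os.WriteFile("crictl.yaml", []byte(crictl), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("02-crio.conf", []byte(crioConf), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote crictl.yaml and 02-crio.conf; on the VM this is followed by `systemctl restart crio`")
}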
	I1028 11:11:29.702292  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:11:29.702379  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:11:29.708037  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:11:29.708101  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:11:29.712169  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:11:29.760681  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:11:29.760781  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:11:29.793958  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:11:29.827829  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:11:29.829108  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:29.831950  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:29.832308  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:29.832337  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:29.832530  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:11:29.837077  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
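The bash one-liner above upserts the host.minikube.internal entry: it drops any stale line for that name and appends the current gateway IP. The same idea as a small Go sketch that prints the rewritten file instead of replacing /etc/hosts (which would need sudo on the guest):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line ending in the given hostname and appends a fresh entry,
// mirroring the grep -v / echo pipeline run over SSH in the log above.
func upsertHost(hostsFile, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hostsFile, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"), "192.168.39.1", "host.minikube.internal"))
}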
	I1028 11:11:29.850764  150723 kubeadm.go:883] updating cluster {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:11:29.850982  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:11:29.851067  150723 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:11:29.884186  150723 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:11:29.884257  150723 ssh_runner.go:195] Run: which lz4
	I1028 11:11:29.888297  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:11:29.888406  150723 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:11:29.892595  150723 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:11:29.892630  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:11:31.364550  150723 crio.go:462] duration metric: took 1.47616531s to copy over tarball
	I1028 11:11:31.364646  150723 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:11:33.492729  150723 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.128048416s)
	I1028 11:11:33.492765  150723 crio.go:469] duration metric: took 2.12817379s to extract the tarball
	I1028 11:11:33.492775  150723 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:11:33.530789  150723 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:11:33.576388  150723 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:11:33.576418  150723 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:11:33.576428  150723 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.2 crio true true} ...
	I1028 11:11:33.576525  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
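The kubelet unit shown above is what gets written into the 10-kubeadm.conf drop-in a few lines later; only the hostname override and node IP differ per node, while the binary path and kubeconfig locations are fixed. A tiny sketch that renders it from those parameters (the helper name is illustrative):

package main

import "fmt"

// kubeletUnit renders the [Service] override used in the log above.
// Only nodeName and nodeIP vary between cluster nodes.
func kubeletUnit(k8sVersion, nodeName, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, k8sVersion, nodeName, nodeIP)
}

func main() {
	fmt.Print(kubeletUnit("v1.31.2", "ha-928358", "192.168.39.206"))
}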
	I1028 11:11:33.576597  150723 ssh_runner.go:195] Run: crio config
	I1028 11:11:33.628433  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:11:33.628457  150723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:11:33.628468  150723 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:11:33.628490  150723 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-928358 NodeName:ha-928358 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:11:33.628623  150723 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-928358"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
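The generated kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A stdlib-only sketch that splits such a file and reports each document's kind, e.g. for inspecting what was scp'd to /var/tmp/minikube/kubeadm.yaml.new (the local file name below is an assumption):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // path is illustrative
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "<unknown>"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}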
	
	I1028 11:11:33.628649  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:11:33.628693  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:11:33.645502  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:11:33.645637  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
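kube-vip is delivered as a static pod: the manifest above is simply dropped into the kubelet's staticPodPath (/etc/kubernetes/manifests per the KubeletConfiguration earlier), which is what the kube-vip.yaml scp below does. The manifest mounts super-admin.conf for its leader-election lease (plndr-cp-lock), presumably so it does not depend on the VIP it is itself providing. A minimal sketch of that final write, with the manifest body elided:

package main

import (
	"os"
	"path/filepath"
)

func main() {
	// The rendered kube-vip pod spec from the log would go here verbatim.
	manifest := []byte("# kube-vip static pod manifest (see the YAML above)\n")

	// Static pod path configured in the KubeletConfiguration; the kubelet picks the
	// file up on its own, no API server involvement is needed to start this pod.
	// Writing here requires root on the guest.
	dst := filepath.Join("/etc/kubernetes/manifests", "kube-vip.yaml")
	if err := os.WriteFile(dst, manifest, 0o600); err != nil {
		panic(err)
	}
}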
	I1028 11:11:33.645712  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:11:33.657169  150723 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:11:33.657234  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:11:33.668705  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:11:33.687712  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:11:33.707287  150723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:11:33.725968  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:11:33.745306  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:11:33.749954  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:11:33.764379  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:11:33.885154  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:11:33.902745  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.206
	I1028 11:11:33.902769  150723 certs.go:194] generating shared ca certs ...
	I1028 11:11:33.902784  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:33.902965  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:11:33.903024  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:11:33.903039  150723 certs.go:256] generating profile certs ...
	I1028 11:11:33.903106  150723 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:11:33.903126  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt with IP's: []
	I1028 11:11:34.090717  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt ...
	I1028 11:11:34.090747  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt: {Name:mk3976b6be27fc4f31aa39dbf48c0afa90955478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.090957  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key ...
	I1028 11:11:34.090981  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key: {Name:mk302db81268b764894e98d850b90eaaced7a15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.091101  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923
	I1028 11:11:34.091124  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.254]
	I1028 11:11:34.335900  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 ...
	I1028 11:11:34.335935  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923: {Name:mk0008343e6fdd7a08b2d031f0ba617f7a66f590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.336144  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923 ...
	I1028 11:11:34.336163  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923: {Name:mkd6c56ea43ae5fd58d0e46e3c3070e385813140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.336286  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:11:34.336450  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:11:34.336537  150723 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:11:34.336559  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt with IP's: []
	I1028 11:11:34.464000  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt ...
	I1028 11:11:34.464029  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt: {Name:mkb9ddbbbcf10a07648ff0910f8f6f99edd94a08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.464231  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key ...
	I1028 11:11:34.464247  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key: {Name:mk17d0ad23ae67dc57b4cfd6ae702fbcda30c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.464343  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:11:34.464369  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:11:34.464389  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:11:34.464407  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:11:34.464422  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:11:34.464435  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:11:34.464453  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:11:34.464472  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:11:34.464549  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:11:34.464601  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:11:34.464617  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:11:34.464647  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:11:34.464682  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:11:34.464714  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:11:34.464766  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:11:34.464809  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.464829  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.464844  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.465667  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:11:34.492761  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:11:34.519090  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:11:34.544886  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:11:34.571307  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:11:34.596836  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:11:34.622460  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:11:34.648376  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:11:34.677988  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:11:34.708308  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:11:34.732512  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:11:34.757152  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:11:34.774559  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:11:34.780665  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:11:34.792209  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.797675  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.797733  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.804182  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:11:34.816617  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:11:34.829067  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.834000  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.834062  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.840080  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:11:34.851913  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:11:34.863842  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.868862  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.868942  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.875065  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
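Each CA is made trusted on the guest by linking it as <openssl-subject-hash>.0 under /etc/ssl/certs, which is what the openssl x509 -hash / ln -fs pairs above are doing. A sketch of the same two steps from Go, shelling out to openssl for the hash; the certificate path in main is just the minikubeCA example from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkAsTrusted computes OpenSSL's subject hash for certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that the c_rehash-style layout expects.
func linkAsTrusted(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mirror ln -fs: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkAsTrusted("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}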
	I1028 11:11:34.888703  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:11:34.893205  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:11:34.893271  150723 kubeadm.go:392] StartCluster: {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:11:34.893354  150723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:11:34.893425  150723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:11:34.932903  150723 cri.go:89] found id: ""
	I1028 11:11:34.932974  150723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:11:34.944526  150723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:11:34.956312  150723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:11:34.967457  150723 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:11:34.967484  150723 kubeadm.go:157] found existing configuration files:
	
	I1028 11:11:34.967537  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:11:34.977810  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:11:34.977875  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:11:34.988232  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:11:34.998184  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:11:34.998247  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:11:35.008728  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:11:35.018729  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:11:35.018793  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:11:35.029800  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:11:35.040304  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:11:35.040357  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:11:35.050830  150723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:11:35.164435  150723 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:11:35.164499  150723 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:11:35.281374  150723 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:11:35.281556  150723 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:11:35.281686  150723 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:11:35.294386  150723 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:11:35.479371  150723 out.go:235]   - Generating certificates and keys ...
	I1028 11:11:35.479512  150723 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:11:35.479602  150723 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:11:35.531977  150723 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:11:35.706199  150723 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:11:35.805605  150723 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:11:35.955545  150723 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:11:36.024313  150723 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:11:36.024446  150723 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-928358 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1028 11:11:36.166366  150723 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:11:36.166553  150723 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-928358 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1028 11:11:36.477451  150723 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:11:36.529937  150723 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:11:36.764928  150723 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:11:36.765199  150723 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:11:36.958542  150723 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:11:37.098519  150723 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:11:37.432447  150723 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:11:37.510265  150723 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:11:37.727523  150723 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:11:37.728159  150723 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:11:37.734975  150723 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:11:37.736761  150723 out.go:235]   - Booting up control plane ...
	I1028 11:11:37.736891  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:11:37.737036  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:11:37.737392  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:11:37.761460  150723 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:11:37.769245  150723 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:11:37.769327  150723 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:11:37.901440  150723 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:11:37.901605  150723 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:11:38.403804  150723 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.460314ms
	I1028 11:11:38.403927  150723 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:11:44.555956  150723 kubeadm.go:310] [api-check] The API server is healthy after 6.1544774s
	I1028 11:11:44.584149  150723 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:11:44.607891  150723 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:11:44.647415  150723 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:11:44.647602  150723 kubeadm.go:310] [mark-control-plane] Marking the node ha-928358 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:11:44.670940  150723 kubeadm.go:310] [bootstrap-token] Using token: 7u74ui.ti422fa98pbd45zp
	I1028 11:11:44.672724  150723 out.go:235]   - Configuring RBAC rules ...
	I1028 11:11:44.672861  150723 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:11:44.681325  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:11:44.701467  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:11:44.720481  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:11:44.731591  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:11:44.743611  150723 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:11:44.968060  150723 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:11:45.411017  150723 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:11:45.970736  150723 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:11:45.970791  150723 kubeadm.go:310] 
	I1028 11:11:45.970885  150723 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:11:45.970911  150723 kubeadm.go:310] 
	I1028 11:11:45.971033  150723 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:11:45.971045  150723 kubeadm.go:310] 
	I1028 11:11:45.971081  150723 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:11:45.971155  150723 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:11:45.971234  150723 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:11:45.971246  150723 kubeadm.go:310] 
	I1028 11:11:45.971327  150723 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:11:45.971346  150723 kubeadm.go:310] 
	I1028 11:11:45.971421  150723 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:11:45.971432  150723 kubeadm.go:310] 
	I1028 11:11:45.971526  150723 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:11:45.971668  150723 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:11:45.971782  150723 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:11:45.971802  150723 kubeadm.go:310] 
	I1028 11:11:45.971912  150723 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:11:45.972050  150723 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:11:45.972078  150723 kubeadm.go:310] 
	I1028 11:11:45.972201  150723 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7u74ui.ti422fa98pbd45zp \
	I1028 11:11:45.972360  150723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 11:11:45.972397  150723 kubeadm.go:310] 	--control-plane 
	I1028 11:11:45.972407  150723 kubeadm.go:310] 
	I1028 11:11:45.972546  150723 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:11:45.972563  150723 kubeadm.go:310] 
	I1028 11:11:45.972685  150723 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7u74ui.ti422fa98pbd45zp \
	I1028 11:11:45.972831  150723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 11:11:45.973046  150723 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:11:45.973098  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:11:45.973115  150723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:11:45.975136  150723 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:11:45.976845  150723 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:11:45.982665  150723 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:11:45.982687  150723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:11:46.004414  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 11:11:46.391016  150723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:11:46.391108  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:46.391153  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358 minikube.k8s.io/updated_at=2024_10_28T11_11_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=true
	I1028 11:11:46.556219  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:46.556239  150723 ops.go:34] apiserver oom_adj: -16
	I1028 11:11:47.056803  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:47.556401  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:48.057031  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:48.556648  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:49.056531  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:49.556278  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.056341  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.557096  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.688176  150723 kubeadm.go:1113] duration metric: took 4.297146148s to wait for elevateKubeSystemPrivileges
	I1028 11:11:50.688219  150723 kubeadm.go:394] duration metric: took 15.794958001s to StartCluster
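The repeated `kubectl get sa default` runs above are minikube polling for the default service account before it grants kube-system elevated RBAC (the elevateKubeSystemPrivileges step whose duration is reported here). A minimal client-go sketch of that wait, with a kubeconfig path and timeout chosen only for illustration, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForDefaultSA polls until the "default" ServiceAccount exists, which is
    // roughly the condition the log above waits on before applying the
    // cluster-admin binding for kube-system.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the timestamps above
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }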
	I1028 11:11:50.688240  150723 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:50.688317  150723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:11:50.689020  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:50.689264  150723 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:11:50.689283  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:11:50.689310  150723 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:11:50.689399  150723 addons.go:69] Setting storage-provisioner=true in profile "ha-928358"
	I1028 11:11:50.689294  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:11:50.689432  150723 addons.go:69] Setting default-storageclass=true in profile "ha-928358"
	I1028 11:11:50.689434  150723 addons.go:234] Setting addon storage-provisioner=true in "ha-928358"
	I1028 11:11:50.689444  150723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-928358"
	I1028 11:11:50.689473  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:11:50.689502  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:50.689978  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.690024  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.690030  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.690078  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.705787  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I1028 11:11:50.705799  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1028 11:11:50.706396  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.706425  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.706943  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.706961  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.707116  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.707141  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.707344  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.707538  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.707605  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.708242  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.708286  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.709865  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:11:50.710123  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:11:50.710718  150723 addons.go:234] Setting addon default-storageclass=true in "ha-928358"
	I1028 11:11:50.710749  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:11:50.710982  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.711007  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.711160  150723 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:11:50.724777  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I1028 11:11:50.725295  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.725751  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33439
	I1028 11:11:50.725906  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.725930  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.726287  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.726327  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.726526  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.726809  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.726831  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.727169  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.727730  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.727777  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.728384  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:50.730334  150723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:11:50.731788  150723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:11:50.731810  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:11:50.731829  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:50.735112  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.735661  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:50.735681  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.735902  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:50.736091  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:50.736234  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:50.736386  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:50.743829  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I1028 11:11:50.744355  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.744925  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.744949  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.745276  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.745461  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.747144  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:50.747358  150723 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:11:50.747374  150723 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:11:50.747388  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:50.749934  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.750358  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:50.750397  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.750503  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:50.750676  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:50.750813  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:50.750942  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:50.872575  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:11:50.921646  150723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:11:50.984303  150723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:11:51.311574  150723 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
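The long sed pipeline a few lines above is what injects that host record: it splices a hosts{} stanza ahead of the forward plugin in the coredns Corefile so host.minikube.internal resolves to the host gateway. A hedged client-go equivalent of the same edit (the helper name injectHostRecord is made up for illustration; the namespace, ConfigMap name, and insertion point come from the command shown in the log):

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // injectHostRecord inserts a hosts{} block resolving host.minikube.internal
    // before the forward plugin in the CoreDNS Corefile, mirroring the sed above.
    func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "        forward . /etc/resolv.conf",
            hosts+"        forward . /etc/resolv.conf", 1)
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }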
	I1028 11:11:51.359517  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.359546  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.359929  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.359938  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.359978  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.359992  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.360011  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.360266  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.360332  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.360347  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.360405  150723 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:11:51.360435  150723 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:11:51.360539  150723 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:11:51.360552  150723 round_trippers.go:469] Request Headers:
	I1028 11:11:51.360564  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:11:51.360580  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:11:51.370574  150723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:11:51.371224  150723 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:11:51.371242  150723 round_trippers.go:469] Request Headers:
	I1028 11:11:51.371253  150723 round_trippers.go:473]     Content-Type: application/json
	I1028 11:11:51.371260  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:11:51.371264  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:11:51.378842  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:11:51.379088  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.379107  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.379391  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.379407  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.723667  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.723697  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.724015  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.724061  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.724071  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.724078  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.724024  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.724319  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.724335  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.726167  150723 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 11:11:51.727603  150723 addons.go:510] duration metric: took 1.038296123s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 11:11:51.727646  150723 start.go:246] waiting for cluster config update ...
	I1028 11:11:51.727661  150723 start.go:255] writing updated cluster config ...
	I1028 11:11:51.729506  150723 out.go:201] 
	I1028 11:11:51.731166  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:51.731233  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:51.732989  150723 out.go:177] * Starting "ha-928358-m02" control-plane node in "ha-928358" cluster
	I1028 11:11:51.734422  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:11:51.734443  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:11:51.734539  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:11:51.734550  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:11:51.734619  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:51.734790  150723 start.go:360] acquireMachinesLock for ha-928358-m02: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:11:51.734834  150723 start.go:364] duration metric: took 28.788µs to acquireMachinesLock for "ha-928358-m02"
	I1028 11:11:51.734851  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:11:51.734918  150723 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 11:11:51.736531  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:11:51.736608  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:51.736641  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:51.751347  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I1028 11:11:51.751714  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:51.752299  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:51.752328  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:51.752603  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:51.752792  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:11:51.752934  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:11:51.753123  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:11:51.753174  150723 client.go:168] LocalClient.Create starting
	I1028 11:11:51.753215  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:11:51.753263  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:11:51.753289  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:11:51.753362  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:11:51.753389  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:11:51.753404  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:11:51.753437  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:11:51.753449  150723 main.go:141] libmachine: (ha-928358-m02) Calling .PreCreateCheck
	I1028 11:11:51.753595  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:11:51.754006  150723 main.go:141] libmachine: Creating machine...
	I1028 11:11:51.754022  150723 main.go:141] libmachine: (ha-928358-m02) Calling .Create
	I1028 11:11:51.754205  150723 main.go:141] libmachine: (ha-928358-m02) Creating KVM machine...
	I1028 11:11:51.755415  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found existing default KVM network
	I1028 11:11:51.755582  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found existing private KVM network mk-ha-928358
	I1028 11:11:51.755707  150723 main.go:141] libmachine: (ha-928358-m02) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 ...
	I1028 11:11:51.755730  150723 main.go:141] libmachine: (ha-928358-m02) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:11:51.755821  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:51.755707  151103 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:51.755971  150723 main.go:141] libmachine: (ha-928358-m02) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:11:51.993174  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:51.993039  151103 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa...
	I1028 11:11:52.383008  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:52.382864  151103 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/ha-928358-m02.rawdisk...
	I1028 11:11:52.383053  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Writing magic tar header
	I1028 11:11:52.383094  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Writing SSH key tar header
	I1028 11:11:52.383117  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:52.383029  151103 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 ...
	I1028 11:11:52.383167  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02
	I1028 11:11:52.383203  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:11:52.383214  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 (perms=drwx------)
	I1028 11:11:52.383224  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:52.383237  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:11:52.383258  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:11:52.383272  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:11:52.383295  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:11:52.383304  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:11:52.383313  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:11:52.383324  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:11:52.383332  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home
	I1028 11:11:52.383343  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Skipping /home - not owner
	I1028 11:11:52.383370  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:11:52.383390  150723 main.go:141] libmachine: (ha-928358-m02) Creating domain...
	I1028 11:11:52.384348  150723 main.go:141] libmachine: (ha-928358-m02) define libvirt domain using xml: 
	I1028 11:11:52.384373  150723 main.go:141] libmachine: (ha-928358-m02) <domain type='kvm'>
	I1028 11:11:52.384400  150723 main.go:141] libmachine: (ha-928358-m02)   <name>ha-928358-m02</name>
	I1028 11:11:52.384412  150723 main.go:141] libmachine: (ha-928358-m02)   <memory unit='MiB'>2200</memory>
	I1028 11:11:52.384426  150723 main.go:141] libmachine: (ha-928358-m02)   <vcpu>2</vcpu>
	I1028 11:11:52.384436  150723 main.go:141] libmachine: (ha-928358-m02)   <features>
	I1028 11:11:52.384457  150723 main.go:141] libmachine: (ha-928358-m02)     <acpi/>
	I1028 11:11:52.384472  150723 main.go:141] libmachine: (ha-928358-m02)     <apic/>
	I1028 11:11:52.384478  150723 main.go:141] libmachine: (ha-928358-m02)     <pae/>
	I1028 11:11:52.384482  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384490  150723 main.go:141] libmachine: (ha-928358-m02)   </features>
	I1028 11:11:52.384494  150723 main.go:141] libmachine: (ha-928358-m02)   <cpu mode='host-passthrough'>
	I1028 11:11:52.384501  150723 main.go:141] libmachine: (ha-928358-m02)   
	I1028 11:11:52.384506  150723 main.go:141] libmachine: (ha-928358-m02)   </cpu>
	I1028 11:11:52.384511  150723 main.go:141] libmachine: (ha-928358-m02)   <os>
	I1028 11:11:52.384516  150723 main.go:141] libmachine: (ha-928358-m02)     <type>hvm</type>
	I1028 11:11:52.384522  150723 main.go:141] libmachine: (ha-928358-m02)     <boot dev='cdrom'/>
	I1028 11:11:52.384526  150723 main.go:141] libmachine: (ha-928358-m02)     <boot dev='hd'/>
	I1028 11:11:52.384531  150723 main.go:141] libmachine: (ha-928358-m02)     <bootmenu enable='no'/>
	I1028 11:11:52.384537  150723 main.go:141] libmachine: (ha-928358-m02)   </os>
	I1028 11:11:52.384561  150723 main.go:141] libmachine: (ha-928358-m02)   <devices>
	I1028 11:11:52.384580  150723 main.go:141] libmachine: (ha-928358-m02)     <disk type='file' device='cdrom'>
	I1028 11:11:52.384598  150723 main.go:141] libmachine: (ha-928358-m02)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/boot2docker.iso'/>
	I1028 11:11:52.384615  150723 main.go:141] libmachine: (ha-928358-m02)       <target dev='hdc' bus='scsi'/>
	I1028 11:11:52.384624  150723 main.go:141] libmachine: (ha-928358-m02)       <readonly/>
	I1028 11:11:52.384628  150723 main.go:141] libmachine: (ha-928358-m02)     </disk>
	I1028 11:11:52.384634  150723 main.go:141] libmachine: (ha-928358-m02)     <disk type='file' device='disk'>
	I1028 11:11:52.384642  150723 main.go:141] libmachine: (ha-928358-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:11:52.384650  150723 main.go:141] libmachine: (ha-928358-m02)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/ha-928358-m02.rawdisk'/>
	I1028 11:11:52.384657  150723 main.go:141] libmachine: (ha-928358-m02)       <target dev='hda' bus='virtio'/>
	I1028 11:11:52.384661  150723 main.go:141] libmachine: (ha-928358-m02)     </disk>
	I1028 11:11:52.384668  150723 main.go:141] libmachine: (ha-928358-m02)     <interface type='network'>
	I1028 11:11:52.384674  150723 main.go:141] libmachine: (ha-928358-m02)       <source network='mk-ha-928358'/>
	I1028 11:11:52.384681  150723 main.go:141] libmachine: (ha-928358-m02)       <model type='virtio'/>
	I1028 11:11:52.384688  150723 main.go:141] libmachine: (ha-928358-m02)     </interface>
	I1028 11:11:52.384692  150723 main.go:141] libmachine: (ha-928358-m02)     <interface type='network'>
	I1028 11:11:52.384698  150723 main.go:141] libmachine: (ha-928358-m02)       <source network='default'/>
	I1028 11:11:52.384703  150723 main.go:141] libmachine: (ha-928358-m02)       <model type='virtio'/>
	I1028 11:11:52.384708  150723 main.go:141] libmachine: (ha-928358-m02)     </interface>
	I1028 11:11:52.384713  150723 main.go:141] libmachine: (ha-928358-m02)     <serial type='pty'>
	I1028 11:11:52.384742  150723 main.go:141] libmachine: (ha-928358-m02)       <target port='0'/>
	I1028 11:11:52.384769  150723 main.go:141] libmachine: (ha-928358-m02)     </serial>
	I1028 11:11:52.384791  150723 main.go:141] libmachine: (ha-928358-m02)     <console type='pty'>
	I1028 11:11:52.384814  150723 main.go:141] libmachine: (ha-928358-m02)       <target type='serial' port='0'/>
	I1028 11:11:52.384828  150723 main.go:141] libmachine: (ha-928358-m02)     </console>
	I1028 11:11:52.384840  150723 main.go:141] libmachine: (ha-928358-m02)     <rng model='virtio'>
	I1028 11:11:52.384852  150723 main.go:141] libmachine: (ha-928358-m02)       <backend model='random'>/dev/random</backend>
	I1028 11:11:52.384859  150723 main.go:141] libmachine: (ha-928358-m02)     </rng>
	I1028 11:11:52.384865  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384887  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384900  150723 main.go:141] libmachine: (ha-928358-m02)   </devices>
	I1028 11:11:52.384910  150723 main.go:141] libmachine: (ha-928358-m02) </domain>
	I1028 11:11:52.384921  150723 main.go:141] libmachine: (ha-928358-m02) 
	I1028 11:11:52.391941  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:67:49 in network default
	I1028 11:11:52.392560  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring networks are active...
	I1028 11:11:52.392579  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:52.393436  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring network default is active
	I1028 11:11:52.393821  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring network mk-ha-928358 is active
	I1028 11:11:52.394171  150723 main.go:141] libmachine: (ha-928358-m02) Getting domain xml...
	I1028 11:11:52.394853  150723 main.go:141] libmachine: (ha-928358-m02) Creating domain...
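The XML printed above is the full libvirt domain definition for the ha-928358-m02 VM. A rough sketch of the define-then-start sequence using the Go libvirt bindings (this assumes github.com/libvirt/libvirt-go and is not minikube's own driver code; domainXML stands in for the document shown above):

    package main

    import (
        libvirt "github.com/libvirt/libvirt-go"
    )

    // defineAndStart persists a domain definition and boots it, the two steps the
    // log reports as "define libvirt domain using xml" and "Creating domain...".
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // starts the VM; minikube then waits for a DHCP lease
    }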
	I1028 11:11:53.630024  150723 main.go:141] libmachine: (ha-928358-m02) Waiting to get IP...
	I1028 11:11:53.630962  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:53.631449  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:53.631495  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:53.631430  151103 retry.go:31] will retry after 231.171985ms: waiting for machine to come up
	I1028 11:11:53.864111  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:53.864512  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:53.864546  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:53.864499  151103 retry.go:31] will retry after 296.507043ms: waiting for machine to come up
	I1028 11:11:54.163050  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:54.163543  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:54.163593  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:54.163496  151103 retry.go:31] will retry after 357.855811ms: waiting for machine to come up
	I1028 11:11:54.523089  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:54.523546  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:54.523575  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:54.523481  151103 retry.go:31] will retry after 569.003787ms: waiting for machine to come up
	I1028 11:11:55.094333  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:55.094770  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:55.094795  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:55.094741  151103 retry.go:31] will retry after 495.310626ms: waiting for machine to come up
	I1028 11:11:55.591480  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:55.592037  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:55.592065  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:55.591984  151103 retry.go:31] will retry after 697.027358ms: waiting for machine to come up
	I1028 11:11:56.291011  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:56.291427  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:56.291455  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:56.291390  151103 retry.go:31] will retry after 819.98241ms: waiting for machine to come up
	I1028 11:11:57.112476  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:57.112920  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:57.112950  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:57.112861  151103 retry.go:31] will retry after 1.468451423s: waiting for machine to come up
	I1028 11:11:58.582633  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:58.583095  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:58.583117  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:58.583044  151103 retry.go:31] will retry after 1.732332827s: waiting for machine to come up
	I1028 11:12:00.316579  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:00.316974  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:00.317005  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:00.316915  151103 retry.go:31] will retry after 1.701246598s: waiting for machine to come up
	I1028 11:12:02.020279  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:02.020762  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:02.020780  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:02.020732  151103 retry.go:31] will retry after 2.239954262s: waiting for machine to come up
	I1028 11:12:04.262705  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:04.263103  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:04.263134  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:04.263076  151103 retry.go:31] will retry after 3.584543805s: waiting for machine to come up
	I1028 11:12:07.848824  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:07.849223  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:07.849246  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:07.849186  151103 retry.go:31] will retry after 4.083747812s: waiting for machine to come up
	I1028 11:12:11.934986  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:11.935519  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:11.935541  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:11.935464  151103 retry.go:31] will retry after 5.450262186s: waiting for machine to come up
	I1028 11:12:17.387598  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.388014  150723 main.go:141] libmachine: (ha-928358-m02) Found IP for machine: 192.168.39.15
	I1028 11:12:17.388040  150723 main.go:141] libmachine: (ha-928358-m02) Reserving static IP address...
	I1028 11:12:17.388061  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has current primary IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.388484  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find host DHCP lease matching {name: "ha-928358-m02", mac: "52:54:00:6f:70:28", ip: "192.168.39.15"} in network mk-ha-928358
	I1028 11:12:17.468628  150723 main.go:141] libmachine: (ha-928358-m02) Reserved static IP address: 192.168.39.15
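The string of "will retry after ..." lines above is minikube's retry helper polling the DHCP leases with a growing delay until the new VM obtains an address. A minimal sketch of that backoff shape (the doubling factor and cap are assumptions, not minikube's exact parameters):

    package main

    import (
        "errors"
        "time"
    )

    // retryWithBackoff calls check with an increasing delay between attempts,
    // the same pattern as the wait-for-IP loop in the log above.
    func retryWithBackoff(check func() (bool, error), initial, max time.Duration, attempts int) error {
        delay := initial
        for i := 0; i < attempts; i++ {
            ok, err := check()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            time.Sleep(delay)
            delay *= 2
            if delay > max {
                delay = max
            }
        }
        return errors.New("condition not met after all attempts")
    }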
	I1028 11:12:17.468659  150723 main.go:141] libmachine: (ha-928358-m02) Waiting for SSH to be available...
	I1028 11:12:17.468668  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Getting to WaitForSSH function...
	I1028 11:12:17.471501  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.472007  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.472034  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.472218  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using SSH client type: external
	I1028 11:12:17.472251  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa (-rw-------)
	I1028 11:12:17.472281  150723 main.go:141] libmachine: (ha-928358-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:12:17.472296  150723 main.go:141] libmachine: (ha-928358-m02) DBG | About to run SSH command:
	I1028 11:12:17.472313  150723 main.go:141] libmachine: (ha-928358-m02) DBG | exit 0
	I1028 11:12:17.602076  150723 main.go:141] libmachine: (ha-928358-m02) DBG | SSH cmd err, output: <nil>: 
	I1028 11:12:17.602372  150723 main.go:141] libmachine: (ha-928358-m02) KVM machine creation complete!
	I1028 11:12:17.602744  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:12:17.603321  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:17.603533  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:17.603697  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:12:17.603728  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetState
	I1028 11:12:17.605258  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:12:17.605275  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:12:17.605282  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:12:17.605291  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.607333  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.607701  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.607721  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.607912  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.608143  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.608313  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.608439  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.608583  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.608808  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.608820  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:12:17.721307  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
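The WaitForSSH step above simply runs `exit 0` over SSH until it succeeds. A hedged sketch of the same reachability check with golang.org/x/crypto/ssh (the docker user, id_rsa key path, and 192.168.39.15:22 address mirror the log; this is illustrative, not minikube's sshutil implementation):

    package main

    import (
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReachable returns nil once "exit 0" runs cleanly on the new VM,
    // mirroring the WaitForSSH step in the log above.
    func sshReachable(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the logged ssh command disables strict host key checking too
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg) // e.g. "192.168.39.15:22"
        if err != nil {
            return err
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }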
	I1028 11:12:17.721336  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:12:17.721347  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.724798  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.725194  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.725223  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.725409  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.725636  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.725807  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.725966  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.726099  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.726262  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.726279  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:12:17.838473  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:12:17.838586  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:12:17.838602  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:12:17.838613  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:17.838892  150723 buildroot.go:166] provisioning hostname "ha-928358-m02"
	I1028 11:12:17.838917  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:17.839093  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.841883  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.842317  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.842339  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.842472  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.842669  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.842831  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.842971  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.843156  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.843326  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.843338  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358-m02 && echo "ha-928358-m02" | sudo tee /etc/hostname
	I1028 11:12:17.968498  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358-m02
	
	I1028 11:12:17.968528  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.971246  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.971623  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.971653  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.971818  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.971988  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.972158  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.972315  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.972474  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.972671  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.972693  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:12:18.095026  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:12:18.095079  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:12:18.095099  150723 buildroot.go:174] setting up certificates
	I1028 11:12:18.095111  150723 provision.go:84] configureAuth start
	I1028 11:12:18.095125  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:18.095406  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.098183  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.098549  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.098574  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.098726  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.100797  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.101183  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.101209  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.101422  150723 provision.go:143] copyHostCerts
	I1028 11:12:18.101450  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:12:18.101483  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:12:18.101493  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:12:18.101585  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:12:18.101707  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:12:18.101736  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:12:18.101747  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:12:18.101792  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:12:18.101860  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:12:18.101880  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:12:18.101884  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:12:18.101906  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:12:18.101972  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358-m02 san=[127.0.0.1 192.168.39.15 ha-928358-m02 localhost minikube]
	I1028 11:12:18.196094  150723 provision.go:177] copyRemoteCerts
	I1028 11:12:18.196152  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:12:18.196173  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.198995  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.199315  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.199339  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.199521  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.199709  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.199854  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.199983  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.288841  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:12:18.288936  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:12:18.314840  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:12:18.314910  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:12:18.341393  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:12:18.341485  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:12:18.366854  150723 provision.go:87] duration metric: took 271.722974ms to configureAuth
	I1028 11:12:18.366893  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:12:18.367124  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:18.367212  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.370267  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.370606  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.370639  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.370796  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.371029  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.371173  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.371307  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.371456  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:18.371620  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:18.371634  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:12:18.612895  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:12:18.612923  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:12:18.612931  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetURL
	I1028 11:12:18.614354  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using libvirt version 6000000
	I1028 11:12:18.616667  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.617056  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.617087  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.617192  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:12:18.617204  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:12:18.617212  150723 client.go:171] duration metric: took 26.86402649s to LocalClient.Create
	I1028 11:12:18.617234  150723 start.go:167] duration metric: took 26.864111247s to libmachine.API.Create "ha-928358"
	I1028 11:12:18.617248  150723 start.go:293] postStartSetup for "ha-928358-m02" (driver="kvm2")
	I1028 11:12:18.617264  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:12:18.617289  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.617583  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:12:18.617614  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.619991  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.620293  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.620324  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.620465  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.620632  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.620807  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.620947  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.709453  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:12:18.714006  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:12:18.714050  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:12:18.714135  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:12:18.714212  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:12:18.714223  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:12:18.714317  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:12:18.725069  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:12:18.750381  150723 start.go:296] duration metric: took 133.112799ms for postStartSetup
	I1028 11:12:18.750443  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:12:18.751083  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.753465  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.753830  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.753860  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.754104  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:12:18.754302  150723 start.go:128] duration metric: took 27.019366662s to createHost
	I1028 11:12:18.754324  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.756274  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.756584  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.756606  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.756746  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.756928  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.757083  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.757211  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.757395  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:18.757617  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:18.757632  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:12:18.870465  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730113938.848702185
	
	I1028 11:12:18.870492  150723 fix.go:216] guest clock: 1730113938.848702185
	I1028 11:12:18.870502  150723 fix.go:229] Guest: 2024-10-28 11:12:18.848702185 +0000 UTC Remote: 2024-10-28 11:12:18.754313813 +0000 UTC m=+79.331053022 (delta=94.388372ms)
	I1028 11:12:18.870523  150723 fix.go:200] guest clock delta is within tolerance: 94.388372ms
	I1028 11:12:18.870530  150723 start.go:83] releasing machines lock for "ha-928358-m02", held for 27.135687063s
	I1028 11:12:18.870557  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.870818  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.873499  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.873921  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.873952  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.876354  150723 out.go:177] * Found network options:
	I1028 11:12:18.877803  150723 out.go:177]   - NO_PROXY=192.168.39.206
	W1028 11:12:18.879297  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:12:18.879332  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.879863  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.880042  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.880145  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:12:18.880199  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	W1028 11:12:18.880223  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:12:18.880307  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:12:18.880332  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.882741  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883009  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.883032  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883152  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883178  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.883365  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.883531  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.883570  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.883597  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883673  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.883773  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.883886  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.883979  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.884097  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:19.140607  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:12:19.146803  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:12:19.146880  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:12:19.163725  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:12:19.163760  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:12:19.163823  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:12:19.180717  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:12:19.195299  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:12:19.195367  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:12:19.209555  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:12:19.223597  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:12:19.345039  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:12:19.505186  150723 docker.go:233] disabling docker service ...
	I1028 11:12:19.505264  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:12:19.520570  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:12:19.534795  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:12:19.656005  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:12:19.777835  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:12:19.793076  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:12:19.813202  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:12:19.813275  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.824795  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:12:19.824878  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.836376  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.847788  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.858444  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:12:19.869710  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.880881  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.900116  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
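	(The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. For reference only, reconstructed from those commands rather than captured from this run, the affected keys in that TOML drop-in should end up roughly as follows; the enclosing sections such as [crio.runtime] and [crio.image] already exist in the file and are omitted here.)

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]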
	I1028 11:12:19.910944  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:12:19.921199  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:12:19.921284  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:12:19.936681  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:12:19.954317  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:20.080754  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:12:20.180414  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:12:20.180503  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:12:20.185906  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:12:20.185979  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:12:20.190133  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:12:20.233553  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:12:20.233626  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:12:20.262764  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:12:20.298972  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:12:20.300478  150723 out.go:177]   - env NO_PROXY=192.168.39.206
	I1028 11:12:20.301810  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:20.304361  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:20.304709  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:20.304731  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:20.304901  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:12:20.309556  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:12:20.323672  150723 mustload.go:65] Loading cluster: ha-928358
	I1028 11:12:20.323882  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:20.324235  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:20.324287  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:20.339013  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I1028 11:12:20.339463  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:20.340030  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:20.340052  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:20.340399  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:20.340615  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:12:20.342314  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:12:20.342631  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:20.342680  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:20.357539  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I1028 11:12:20.358002  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:20.358498  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:20.358519  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:20.359008  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:20.359212  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:12:20.359422  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.15
	I1028 11:12:20.359434  150723 certs.go:194] generating shared ca certs ...
	I1028 11:12:20.359450  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.359573  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:12:20.359614  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:12:20.359623  150723 certs.go:256] generating profile certs ...
	I1028 11:12:20.359689  150723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:12:20.359712  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94
	I1028 11:12:20.359727  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.254]
	I1028 11:12:20.442903  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 ...
	I1028 11:12:20.442934  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94: {Name:mk85a4e1a50b9026ab3d6dc4495b321bb7e02ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.443115  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94 ...
	I1028 11:12:20.443128  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94: {Name:mk7f773e25633de1a7b22c2c20b13ade22c5f211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.443202  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:12:20.443334  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:12:20.443463  150723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:12:20.443480  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:12:20.443493  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:12:20.443506  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:12:20.443519  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:12:20.443535  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:12:20.443547  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:12:20.443559  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:12:20.443571  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:12:20.443620  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:12:20.443647  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:12:20.443657  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:12:20.443683  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:12:20.443705  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:12:20.443728  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:12:20.443767  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:12:20.443793  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:20.443806  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:12:20.443820  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:12:20.443852  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:12:20.446971  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:20.447376  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:12:20.447407  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:20.447537  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:12:20.447754  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:12:20.447909  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:12:20.448040  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:12:20.533935  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:12:20.540194  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:12:20.553555  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:12:20.558471  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:12:20.571472  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:12:20.576267  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:12:20.588003  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:12:20.593338  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:12:20.605038  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:12:20.609724  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:12:20.623742  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:12:20.628679  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:12:20.640341  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:12:20.667017  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:12:20.692744  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:12:20.718588  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:12:20.748034  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:12:20.775373  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:12:20.802947  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:12:20.831097  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:12:20.858123  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:12:20.882703  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:12:20.907628  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:12:20.933325  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:12:20.951380  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:12:20.970398  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:12:20.988118  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:12:21.006403  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:12:21.027746  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:12:21.046174  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:12:21.066465  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:12:21.072838  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:12:21.086541  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.091618  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.091672  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.098303  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:12:21.110328  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:12:21.122629  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.127701  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.127772  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.134271  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:12:21.146879  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:12:21.159782  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.165113  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.165173  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.171693  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:12:21.183939  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:12:21.188218  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:12:21.188285  150723 kubeadm.go:934] updating node {m02 192.168.39.15 8443 v1.31.2 crio true true} ...
	I1028 11:12:21.188380  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:12:21.188402  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:12:21.188440  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:12:21.207772  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:12:21.207836  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:12:21.207903  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:12:21.219161  150723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:12:21.219233  150723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:12:21.229788  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:12:21.229822  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:12:21.229868  150723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 11:12:21.229883  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:12:21.229901  150723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 11:12:21.234643  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:12:21.234682  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:12:22.169217  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:12:22.169290  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:12:22.175155  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:12:22.175187  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:12:22.612156  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:12:22.630404  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:12:22.630517  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:12:22.635637  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:12:22.635690  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:12:22.984793  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:12:22.995829  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:12:23.014631  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:12:23.033132  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:12:23.051694  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:12:23.056057  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:12:23.069704  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:23.193632  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:12:23.213616  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:12:23.214094  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:23.214154  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:23.229467  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39255
	I1028 11:12:23.229946  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:23.230470  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:23.230493  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:23.230811  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:23.231005  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:12:23.231156  150723 start.go:317] joinCluster: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:12:23.231250  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:12:23.231265  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:12:23.234605  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:23.235105  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:12:23.235130  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:23.235484  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:12:23.235658  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:12:23.235817  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:12:23.235978  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:12:23.587402  150723 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:12:23.587450  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0up603.shgmvlsrpj1mebjg --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m02 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443"
	I1028 11:12:49.062311  150723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0up603.shgmvlsrpj1mebjg --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m02 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443": (25.474831461s)
	I1028 11:12:49.062358  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:12:49.750628  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358-m02 minikube.k8s.io/updated_at=2024_10_28T11_12_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=false
	I1028 11:12:49.901989  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-928358-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:12:50.021163  150723 start.go:319] duration metric: took 26.789999674s to joinCluster
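	For reference, the join performed above reduces to a handful of commands, shown here as a minimal sketch assembled from the exact invocations in this log (the token and CA hash below are placeholders, since the real ones are run-specific):

	    # on the existing control-plane node: print a fresh join command
	    sudo kubeadm token create --print-join-command --ttl=0

	    # on the new node: join as an additional control-plane member over the cri-o socket
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	      --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443 \
	      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m02

	    # back through the primary's kubeconfig: tag the node (labels condensed here) and allow scheduling
	    kubectl label --overwrite nodes ha-928358-m02 minikube.k8s.io/primary=false
	    kubectl taint nodes ha-928358-m02 node-role.kubernetes.io/control-plane:NoSchedule-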
	I1028 11:12:50.021261  150723 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:12:50.021588  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:50.022686  150723 out.go:177] * Verifying Kubernetes components...
	I1028 11:12:50.024027  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:50.259666  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:12:50.294975  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:12:50.295261  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:12:50.295325  150723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.206:8443
	I1028 11:12:50.295539  150723 node_ready.go:35] waiting up to 6m0s for node "ha-928358-m02" to be "Ready" ...
	I1028 11:12:50.295634  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:50.295644  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:50.295655  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:50.295661  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:50.311123  150723 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1028 11:12:50.796718  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:50.796750  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:50.796761  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:50.796767  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:50.800704  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:51.296741  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:51.296771  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:51.296783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:51.296789  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:51.301317  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:51.796429  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:51.796461  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:51.796472  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:51.796479  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:51.902786  150723 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I1028 11:12:52.295866  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:52.295889  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:52.295896  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:52.295902  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:52.299707  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:52.300296  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:52.796802  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:52.796836  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:52.796848  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:52.796854  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:52.801105  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:53.296430  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:53.296464  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:53.296476  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:53.296482  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:53.300401  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:53.796454  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:53.796475  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:53.796483  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:53.796487  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:53.800686  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:54.296632  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:54.296658  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:54.296669  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:54.296675  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:54.430413  150723 round_trippers.go:574] Response Status: 200 OK in 133 milliseconds
	I1028 11:12:54.431260  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:54.796228  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:54.796251  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:54.796260  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:54.796297  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:54.799743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:55.295741  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:55.295769  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:55.295779  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:55.295784  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:55.300264  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:55.796141  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:55.796166  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:55.796177  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:55.796183  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:55.799984  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:56.296002  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:56.296025  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:56.296033  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:56.296038  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:56.299236  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:56.796285  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:56.796327  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:56.796338  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:56.796343  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:56.801079  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:56.801722  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:57.295973  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:57.296010  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:57.296019  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:57.296022  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:57.300070  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:57.796110  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:57.796138  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:57.796150  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:57.796156  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:57.800286  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:58.296657  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:58.296684  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:58.296694  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:58.296700  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:58.300601  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:58.795760  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:58.795783  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:58.795791  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:58.795795  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:58.799253  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:59.296427  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:59.296448  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:59.296457  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:59.296461  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:59.300112  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:59.300577  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:59.795852  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:59.795874  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:59.795882  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:59.795886  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:59.799187  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:00.296355  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:00.296376  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:00.296385  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:00.296388  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:00.300090  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:00.796212  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:00.796241  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:00.796250  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:00.796255  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:00.799643  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:01.296675  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:01.296698  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:01.296706  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:01.296720  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:01.300506  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:01.300981  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:01.795747  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:01.795781  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:01.795793  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:01.795800  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:01.799384  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:02.296561  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:02.296587  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:02.296595  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:02.296601  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:02.300227  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:02.796111  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:02.796139  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:02.796150  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:02.796175  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:02.799502  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:03.295908  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:03.295932  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:03.295940  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:03.295944  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:03.299608  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:03.796579  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:03.796602  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:03.796611  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:03.796615  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:03.801307  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:03.802803  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:04.296022  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:04.296047  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:04.296055  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:04.296058  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:04.300556  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:04.796471  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:04.796494  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:04.796502  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:04.796507  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:04.801460  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:05.296387  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:05.296409  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:05.296417  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:05.296422  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:05.299743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:05.796148  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:05.796171  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:05.796179  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:05.796184  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:05.801488  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:06.296441  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:06.296475  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:06.296487  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:06.296492  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:06.300636  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:06.301140  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:06.796015  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:06.796054  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:06.796067  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:06.796073  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:06.802178  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:07.295805  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:07.295832  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:07.295841  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:07.295845  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:07.300831  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:07.796368  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:07.796395  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:07.796407  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:07.796413  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:07.800287  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.295819  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:08.295846  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.295856  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.295862  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.303573  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:13:08.304813  150723 node_ready.go:49] node "ha-928358-m02" has status "Ready":"True"
	I1028 11:13:08.304842  150723 node_ready.go:38] duration metric: took 18.009284836s for node "ha-928358-m02" to be "Ready" ...
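	The polling above (repeated GETs against /api/v1/nodes/ha-928358-m02 every ~500ms until the Ready condition flips to True) is equivalent to a single kubectl wait against the same cluster; a minimal sketch, assuming the kubeconfig context carries the profile name:

	    kubectl --context ha-928358 wait --for=condition=Ready node/ha-928358-m02 --timeout=6m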
	I1028 11:13:08.304855  150723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:13:08.304964  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:08.304977  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.304986  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.304996  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.314253  150723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:13:08.322556  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.322661  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gnm9r
	I1028 11:13:08.322674  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.322686  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.322694  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.325598  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.326235  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.326251  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.326262  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.326267  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.329653  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.330306  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.330330  150723 pod_ready.go:82] duration metric: took 7.745243ms for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.330344  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.330420  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xxxgw
	I1028 11:13:08.330431  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.330443  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.330451  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.333854  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.334683  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.334698  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.334709  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.334717  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.338575  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.339125  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.339151  150723 pod_ready.go:82] duration metric: took 8.79493ms for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.339166  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.339239  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358
	I1028 11:13:08.339251  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.339260  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.339266  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.342147  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.342887  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.342903  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.342914  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.342919  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.345586  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.346017  150723 pod_ready.go:93] pod "etcd-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.346037  150723 pod_ready.go:82] duration metric: took 6.859007ms for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.346049  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.346126  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m02
	I1028 11:13:08.346136  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.346149  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.346155  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.349837  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.350760  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:08.350776  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.350783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.350787  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.354111  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.354776  150723 pod_ready.go:93] pod "etcd-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.354797  150723 pod_ready.go:82] duration metric: took 8.74104ms for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.354818  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.496252  150723 request.go:632] Waited for 141.345028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:13:08.496314  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:13:08.496320  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.496333  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.496338  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.500168  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.696151  150723 request.go:632] Waited for 195.353851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.696219  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.696228  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.696240  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.696249  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.700151  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.701139  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.701160  150723 pod_ready.go:82] duration metric: took 346.331354ms for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.701174  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.896292  150723 request.go:632] Waited for 195.012978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:13:08.896361  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:13:08.896371  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.896387  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.896396  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.900050  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.096401  150723 request.go:632] Waited for 195.396634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.096476  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.096481  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.096489  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.096493  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.100986  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:09.101422  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.101442  150723 pod_ready.go:82] duration metric: took 400.258829ms for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.101456  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.296560  150723 request.go:632] Waited for 195.02851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:13:09.296638  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:13:09.296643  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.296654  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.296672  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.300596  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.496746  150723 request.go:632] Waited for 195.271102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:09.496832  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:09.496844  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.496856  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.496863  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.500375  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.501182  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.501208  150723 pod_ready.go:82] duration metric: took 399.742852ms for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.501223  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.696672  150723 request.go:632] Waited for 195.364831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:13:09.696747  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:13:09.696753  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.696761  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.696765  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.700353  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.896500  150723 request.go:632] Waited for 195.402622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.896557  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.896562  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.896570  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.896574  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.899876  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.900586  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.900606  150723 pod_ready.go:82] duration metric: took 399.370555ms for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.900621  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.096828  150723 request.go:632] Waited for 196.099526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:13:10.096889  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:13:10.096895  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.096902  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.096907  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.100607  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.295935  150723 request.go:632] Waited for 194.296247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:10.296028  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:10.296036  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.296047  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.296052  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.299514  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.299992  150723 pod_ready.go:93] pod "kube-proxy-8fxdn" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:10.300013  150723 pod_ready.go:82] duration metric: took 399.384578ms for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.300033  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.496260  150723 request.go:632] Waited for 196.135494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:13:10.496330  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:13:10.496339  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.496347  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.496352  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.500702  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:10.696747  150723 request.go:632] Waited for 195.398969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:10.696828  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:10.696834  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.696842  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.696849  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.700510  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.701486  150723 pod_ready.go:93] pod "kube-proxy-cfhp5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:10.701505  150723 pod_ready.go:82] duration metric: took 401.465094ms for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.701515  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.896720  150723 request.go:632] Waited for 195.109133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:13:10.896777  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:13:10.896783  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.896790  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.896795  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.900315  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.096400  150723 request.go:632] Waited for 195.36981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:11.096478  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:11.096483  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.096493  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.096499  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.100065  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.100566  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:11.100590  150723 pod_ready.go:82] duration metric: took 399.065558ms for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.100600  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.296785  150723 request.go:632] Waited for 196.108788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:13:11.296873  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:13:11.296881  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.296891  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.296896  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.300760  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.495907  150723 request.go:632] Waited for 194.292764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:11.495994  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:11.496001  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.496011  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.496021  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.500420  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:11.500960  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:11.500979  150723 pod_ready.go:82] duration metric: took 400.371324ms for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.500991  150723 pod_ready.go:39] duration metric: took 3.196117998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
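	The per-pod waits above walk each control-plane component on both nodes one at a time; the same readiness check can be expressed in bulk with the label selectors the log queries, for example (a sketch, one line per label):

	    kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m
	    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m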
	I1028 11:13:11.501012  150723 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:13:11.501071  150723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:13:11.518775  150723 api_server.go:72] duration metric: took 21.497464525s to wait for apiserver process to appear ...
	I1028 11:13:11.518811  150723 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:13:11.518839  150723 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1028 11:13:11.523103  150723 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
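	The healthz probe is a plain HTTPS GET against the apiserver; assuming /healthz is readable without client credentials (the default system:public-info-viewer binding), it can be reproduced directly:

	    curl -k https://192.168.39.206:8443/healthz
	    # a healthy apiserver answers with the body: ok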
	I1028 11:13:11.523168  150723 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1028 11:13:11.523173  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.523180  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.523189  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.524064  150723 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:13:11.524163  150723 api_server.go:141] control plane version: v1.31.2
	I1028 11:13:11.524189  150723 api_server.go:131] duration metric: took 5.370992ms to wait for apiserver health ...
	I1028 11:13:11.524197  150723 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:13:11.696656  150723 request.go:632] Waited for 172.384226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:11.696727  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:11.696733  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.696740  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.696744  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.702489  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:11.707749  150723 system_pods.go:59] 17 kube-system pods found
	I1028 11:13:11.707791  150723 system_pods.go:61] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:13:11.707798  150723 system_pods.go:61] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:13:11.707802  150723 system_pods.go:61] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:13:11.707805  150723 system_pods.go:61] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:13:11.707808  150723 system_pods.go:61] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:13:11.707812  150723 system_pods.go:61] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:13:11.707815  150723 system_pods.go:61] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:13:11.707818  150723 system_pods.go:61] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:13:11.707821  150723 system_pods.go:61] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:13:11.707824  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:13:11.707828  150723 system_pods.go:61] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:13:11.707831  150723 system_pods.go:61] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:13:11.707833  150723 system_pods.go:61] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:13:11.707837  150723 system_pods.go:61] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:13:11.707840  150723 system_pods.go:61] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:13:11.707843  150723 system_pods.go:61] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:13:11.707847  150723 system_pods.go:61] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:13:11.707852  150723 system_pods.go:74] duration metric: took 183.650264ms to wait for pod list to return data ...
	I1028 11:13:11.707863  150723 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:13:11.895935  150723 request.go:632] Waited for 187.997842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:13:11.895992  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:13:11.895997  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.896004  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.896009  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.900031  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:11.900269  150723 default_sa.go:45] found service account: "default"
	I1028 11:13:11.900286  150723 default_sa.go:55] duration metric: took 192.416558ms for default service account to be created ...
	I1028 11:13:11.900298  150723 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:13:12.096570  150723 request.go:632] Waited for 196.184771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:12.096668  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:12.096678  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:12.096690  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:12.096703  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:12.102990  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:13:12.107971  150723 system_pods.go:86] 17 kube-system pods found
	I1028 11:13:12.108008  150723 system_pods.go:89] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:13:12.108017  150723 system_pods.go:89] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:13:12.108022  150723 system_pods.go:89] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:13:12.108027  150723 system_pods.go:89] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:13:12.108032  150723 system_pods.go:89] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:13:12.108037  150723 system_pods.go:89] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:13:12.108044  150723 system_pods.go:89] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:13:12.108051  150723 system_pods.go:89] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:13:12.108056  150723 system_pods.go:89] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:13:12.108062  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:13:12.108067  150723 system_pods.go:89] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:13:12.108072  150723 system_pods.go:89] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:13:12.108076  150723 system_pods.go:89] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:13:12.108082  150723 system_pods.go:89] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:13:12.108088  150723 system_pods.go:89] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:13:12.108094  150723 system_pods.go:89] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:13:12.108101  150723 system_pods.go:89] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:13:12.108116  150723 system_pods.go:126] duration metric: took 207.810112ms to wait for k8s-apps to be running ...
	I1028 11:13:12.108138  150723 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:13:12.108196  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:13:12.125765  150723 system_svc.go:56] duration metric: took 17.59726ms WaitForService to wait for kubelet
	I1028 11:13:12.125805  150723 kubeadm.go:582] duration metric: took 22.104503497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
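	The component map in the line above (apiserver, apps_running, default_sa, extra, kubelet, node_ready, system_pods) corresponds to minikube's --wait component list; a sketch of requesting the same full verification set explicitly when starting this profile:

	    minikube start -p ha-928358 --driver=kvm2 --container-runtime=crio --wait=all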
	I1028 11:13:12.125835  150723 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:13:12.296271  150723 request.go:632] Waited for 170.346607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1028 11:13:12.296352  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1028 11:13:12.296358  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:12.296365  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:12.296370  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:12.301322  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:12.302235  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:13:12.302261  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:13:12.302297  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:13:12.302303  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:13:12.302310  150723 node_conditions.go:105] duration metric: took 176.469824ms to run NodePressure ...
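	The NodePressure check reads capacity and conditions off each node object (ephemeral storage 17734596Ki and 2 CPUs per node in this run); the same data is visible directly from the cluster, for example:

	    kubectl describe node ha-928358 ha-928358-m02
	    # the Capacity and Conditions sections carry the values the log reports here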
	I1028 11:13:12.302331  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:13:12.302371  150723 start.go:255] writing updated cluster config ...
	I1028 11:13:12.304722  150723 out.go:201] 
	I1028 11:13:12.306493  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:12.306595  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:12.308496  150723 out.go:177] * Starting "ha-928358-m03" control-plane node in "ha-928358" cluster
	I1028 11:13:12.310210  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:13:12.310234  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:13:12.310336  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:13:12.310347  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:13:12.310430  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:12.310601  150723 start.go:360] acquireMachinesLock for ha-928358-m03: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:13:12.310642  150723 start.go:364] duration metric: took 22.061µs to acquireMachinesLock for "ha-928358-m03"
	I1028 11:13:12.310662  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
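	The provisioning config printed above is a single Go struct dump. Purely for readability, a trimmed-down sketch of the shape of that configuration is shown below; the field names are taken from the dump itself, but the real minikube config types contain many more fields than this illustration.

	    // Illustrative only: a reduced view of the cluster/node config whose
	    // fields appear in the dump above. Not the actual minikube types.
	    package main

	    import "fmt"

	    type Node struct {
	    	Name              string // "" for the primary node, "m02", "m03", ...
	    	IP                string // empty until the VM obtains a DHCP lease
	    	Port              int    // API server port, 8443
	    	KubernetesVersion string // "v1.31.2"
	    	ContainerRuntime  string // "crio"
	    	ControlPlane      bool
	    	Worker            bool
	    }

	    type ClusterConfig struct {
	    	Name          string // "ha-928358"
	    	Memory        int    // MB per VM, 2200
	    	CPUs          int    // 2
	    	DiskSize      int    // MB, 20000
	    	Driver        string // "kvm2"
	    	KVMNetwork    string // "default"
	    	KVMQemuURI    string // "qemu:///system"
	    	APIServerPort int
	    	Nodes         []Node
	    	Addons        map[string]bool
	    }

	    func main() {
	    	cfg := ClusterConfig{
	    		Name:   "ha-928358",
	    		Driver: "kvm2",
	    		Nodes:  []Node{{Name: "m03", Port: 8443, ControlPlane: true, Worker: true}},
	    	}
	    	fmt.Println(cfg.Name, "nodes:", len(cfg.Nodes))
	    }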
	I1028 11:13:12.310748  150723 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 11:13:12.312443  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:13:12.312555  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:12.312596  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:12.327768  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I1028 11:13:12.328249  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:12.328745  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:12.328765  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:12.329102  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:12.329311  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:12.329448  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:12.329611  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:13:12.329642  150723 client.go:168] LocalClient.Create starting
	I1028 11:13:12.329670  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:13:12.329703  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:13:12.329720  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:13:12.329768  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:13:12.329788  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:13:12.329799  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:13:12.329815  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:13:12.329826  150723 main.go:141] libmachine: (ha-928358-m03) Calling .PreCreateCheck
	I1028 11:13:12.329995  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:12.330372  150723 main.go:141] libmachine: Creating machine...
	I1028 11:13:12.330386  150723 main.go:141] libmachine: (ha-928358-m03) Calling .Create
	I1028 11:13:12.330528  150723 main.go:141] libmachine: (ha-928358-m03) Creating KVM machine...
	I1028 11:13:12.331834  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found existing default KVM network
	I1028 11:13:12.332000  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found existing private KVM network mk-ha-928358
	I1028 11:13:12.332124  150723 main.go:141] libmachine: (ha-928358-m03) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 ...
	I1028 11:13:12.332140  150723 main.go:141] libmachine: (ha-928358-m03) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:13:12.332221  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.332127  151534 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:13:12.332333  150723 main.go:141] libmachine: (ha-928358-m03) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:13:12.597391  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.597227  151534 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa...
	I1028 11:13:12.699922  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.699777  151534 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/ha-928358-m03.rawdisk...
	I1028 11:13:12.699960  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Writing magic tar header
	I1028 11:13:12.699975  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Writing SSH key tar header
	I1028 11:13:12.699986  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.699933  151534 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 ...
	I1028 11:13:12.700170  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 (perms=drwx------)
	I1028 11:13:12.700205  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:13:12.700218  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03
	I1028 11:13:12.700232  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:13:12.700244  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:13:12.700258  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:13:12.700271  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:13:12.700287  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:13:12.700300  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:13:12.700313  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:13:12.700325  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:13:12.700339  150723 main.go:141] libmachine: (ha-928358-m03) Creating domain...
	I1028 11:13:12.700363  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:13:12.700371  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home
	I1028 11:13:12.700395  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Skipping /home - not owner
	I1028 11:13:12.701297  150723 main.go:141] libmachine: (ha-928358-m03) define libvirt domain using xml: 
	I1028 11:13:12.701328  150723 main.go:141] libmachine: (ha-928358-m03) <domain type='kvm'>
	I1028 11:13:12.701339  150723 main.go:141] libmachine: (ha-928358-m03)   <name>ha-928358-m03</name>
	I1028 11:13:12.701346  150723 main.go:141] libmachine: (ha-928358-m03)   <memory unit='MiB'>2200</memory>
	I1028 11:13:12.701358  150723 main.go:141] libmachine: (ha-928358-m03)   <vcpu>2</vcpu>
	I1028 11:13:12.701364  150723 main.go:141] libmachine: (ha-928358-m03)   <features>
	I1028 11:13:12.701373  150723 main.go:141] libmachine: (ha-928358-m03)     <acpi/>
	I1028 11:13:12.701383  150723 main.go:141] libmachine: (ha-928358-m03)     <apic/>
	I1028 11:13:12.701391  150723 main.go:141] libmachine: (ha-928358-m03)     <pae/>
	I1028 11:13:12.701404  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701415  150723 main.go:141] libmachine: (ha-928358-m03)   </features>
	I1028 11:13:12.701423  150723 main.go:141] libmachine: (ha-928358-m03)   <cpu mode='host-passthrough'>
	I1028 11:13:12.701433  150723 main.go:141] libmachine: (ha-928358-m03)   
	I1028 11:13:12.701445  150723 main.go:141] libmachine: (ha-928358-m03)   </cpu>
	I1028 11:13:12.701456  150723 main.go:141] libmachine: (ha-928358-m03)   <os>
	I1028 11:13:12.701463  150723 main.go:141] libmachine: (ha-928358-m03)     <type>hvm</type>
	I1028 11:13:12.701472  150723 main.go:141] libmachine: (ha-928358-m03)     <boot dev='cdrom'/>
	I1028 11:13:12.701478  150723 main.go:141] libmachine: (ha-928358-m03)     <boot dev='hd'/>
	I1028 11:13:12.701513  150723 main.go:141] libmachine: (ha-928358-m03)     <bootmenu enable='no'/>
	I1028 11:13:12.701555  150723 main.go:141] libmachine: (ha-928358-m03)   </os>
	I1028 11:13:12.701565  150723 main.go:141] libmachine: (ha-928358-m03)   <devices>
	I1028 11:13:12.701573  150723 main.go:141] libmachine: (ha-928358-m03)     <disk type='file' device='cdrom'>
	I1028 11:13:12.701585  150723 main.go:141] libmachine: (ha-928358-m03)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/boot2docker.iso'/>
	I1028 11:13:12.701593  150723 main.go:141] libmachine: (ha-928358-m03)       <target dev='hdc' bus='scsi'/>
	I1028 11:13:12.701600  150723 main.go:141] libmachine: (ha-928358-m03)       <readonly/>
	I1028 11:13:12.701607  150723 main.go:141] libmachine: (ha-928358-m03)     </disk>
	I1028 11:13:12.701622  150723 main.go:141] libmachine: (ha-928358-m03)     <disk type='file' device='disk'>
	I1028 11:13:12.701635  150723 main.go:141] libmachine: (ha-928358-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:13:12.701651  150723 main.go:141] libmachine: (ha-928358-m03)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/ha-928358-m03.rawdisk'/>
	I1028 11:13:12.701662  150723 main.go:141] libmachine: (ha-928358-m03)       <target dev='hda' bus='virtio'/>
	I1028 11:13:12.701673  150723 main.go:141] libmachine: (ha-928358-m03)     </disk>
	I1028 11:13:12.701683  150723 main.go:141] libmachine: (ha-928358-m03)     <interface type='network'>
	I1028 11:13:12.701717  150723 main.go:141] libmachine: (ha-928358-m03)       <source network='mk-ha-928358'/>
	I1028 11:13:12.701741  150723 main.go:141] libmachine: (ha-928358-m03)       <model type='virtio'/>
	I1028 11:13:12.701754  150723 main.go:141] libmachine: (ha-928358-m03)     </interface>
	I1028 11:13:12.701765  150723 main.go:141] libmachine: (ha-928358-m03)     <interface type='network'>
	I1028 11:13:12.701776  150723 main.go:141] libmachine: (ha-928358-m03)       <source network='default'/>
	I1028 11:13:12.701787  150723 main.go:141] libmachine: (ha-928358-m03)       <model type='virtio'/>
	I1028 11:13:12.701800  150723 main.go:141] libmachine: (ha-928358-m03)     </interface>
	I1028 11:13:12.701809  150723 main.go:141] libmachine: (ha-928358-m03)     <serial type='pty'>
	I1028 11:13:12.701821  150723 main.go:141] libmachine: (ha-928358-m03)       <target port='0'/>
	I1028 11:13:12.701833  150723 main.go:141] libmachine: (ha-928358-m03)     </serial>
	I1028 11:13:12.701844  150723 main.go:141] libmachine: (ha-928358-m03)     <console type='pty'>
	I1028 11:13:12.701855  150723 main.go:141] libmachine: (ha-928358-m03)       <target type='serial' port='0'/>
	I1028 11:13:12.701866  150723 main.go:141] libmachine: (ha-928358-m03)     </console>
	I1028 11:13:12.701874  150723 main.go:141] libmachine: (ha-928358-m03)     <rng model='virtio'>
	I1028 11:13:12.701883  150723 main.go:141] libmachine: (ha-928358-m03)       <backend model='random'>/dev/random</backend>
	I1028 11:13:12.701898  150723 main.go:141] libmachine: (ha-928358-m03)     </rng>
	I1028 11:13:12.701909  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701917  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701927  150723 main.go:141] libmachine: (ha-928358-m03)   </devices>
	I1028 11:13:12.701935  150723 main.go:141] libmachine: (ha-928358-m03) </domain>
	I1028 11:13:12.701944  150723 main.go:141] libmachine: (ha-928358-m03) 
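	The <domain> definition logged above is generated by the kvm2 driver before the guest is defined through libvirt. As an illustration only (not minikube's actual code), the same XML could be rendered from a small Go text/template over a handful of machine parameters; the struct fields and file paths below are hypothetical.

	    // Sketch: rendering a libvirt domain definition similar to the one
	    // logged above. Template and field names are illustrative.
	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    type machine struct {
	    	Name, ISO, Disk, Network string
	    	MemoryMiB, CPUs          int
	    }

	    const domainXML = `<domain type='kvm'>
	      <name>{{.Name}}</name>
	      <memory unit='MiB'>{{.MemoryMiB}}</memory>
	      <vcpu>{{.CPUs}}</vcpu>
	      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	      <devices>
	        <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
	        <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
	        <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
	        <interface type='network'><source network='default'/><model type='virtio'/></interface>
	      </devices>
	    </domain>
	    `

	    func main() {
	    	m := machine{
	    		Name:      "ha-928358-m03",
	    		MemoryMiB: 2200,
	    		CPUs:      2,
	    		ISO:       "/path/to/boot2docker.iso",      // placeholder path
	    		Disk:      "/path/to/ha-928358-m03.rawdisk", // placeholder path
	    		Network:   "mk-ha-928358",
	    	}
	    	// The rendered XML is what gets handed to libvirt to define the domain.
	    	template.Must(template.New("domain").Parse(domainXML)).Execute(os.Stdout, m)
	    }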
	I1028 11:13:12.709093  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:b5:fb:00 in network default
	I1028 11:13:12.709827  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring networks are active...
	I1028 11:13:12.709849  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:12.710555  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring network default is active
	I1028 11:13:12.710786  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring network mk-ha-928358 is active
	I1028 11:13:12.711115  150723 main.go:141] libmachine: (ha-928358-m03) Getting domain xml...
	I1028 11:13:12.711807  150723 main.go:141] libmachine: (ha-928358-m03) Creating domain...
	I1028 11:13:13.995752  150723 main.go:141] libmachine: (ha-928358-m03) Waiting to get IP...
	I1028 11:13:13.996563  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:13.997045  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:13.997085  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:13.997018  151534 retry.go:31] will retry after 234.151571ms: waiting for machine to come up
	I1028 11:13:14.232519  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.233064  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.233096  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.232999  151534 retry.go:31] will retry after 249.582339ms: waiting for machine to come up
	I1028 11:13:14.484383  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.484878  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.484915  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.484812  151534 retry.go:31] will retry after 409.553215ms: waiting for machine to come up
	I1028 11:13:14.896380  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.896855  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.896887  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.896797  151534 retry.go:31] will retry after 412.085621ms: waiting for machine to come up
	I1028 11:13:15.310086  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:15.310769  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:15.310799  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:15.310719  151534 retry.go:31] will retry after 651.315136ms: waiting for machine to come up
	I1028 11:13:15.963589  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:15.964049  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:15.964078  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:15.963990  151534 retry.go:31] will retry after 936.522294ms: waiting for machine to come up
	I1028 11:13:16.902173  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:16.902668  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:16.902689  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:16.902618  151534 retry.go:31] will retry after 774.455135ms: waiting for machine to come up
	I1028 11:13:17.679023  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:17.679574  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:17.679600  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:17.679540  151534 retry.go:31] will retry after 1.069131352s: waiting for machine to come up
	I1028 11:13:18.750780  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:18.751352  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:18.751375  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:18.751284  151534 retry.go:31] will retry after 1.587573663s: waiting for machine to come up
	I1028 11:13:20.340206  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:20.340612  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:20.340643  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:20.340566  151534 retry.go:31] will retry after 1.424108777s: waiting for machine to come up
	I1028 11:13:21.766872  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:21.767376  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:21.767397  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:21.767337  151534 retry.go:31] will retry after 1.867673803s: waiting for machine to come up
	I1028 11:13:23.637608  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:23.638075  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:23.638103  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:23.638049  151534 retry.go:31] will retry after 3.385284423s: waiting for machine to come up
	I1028 11:13:27.027812  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:27.028397  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:27.028423  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:27.028342  151534 retry.go:31] will retry after 4.143137357s: waiting for machine to come up
	I1028 11:13:31.174612  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:31.174990  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:31.175020  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:31.174951  151534 retry.go:31] will retry after 3.870983412s: waiting for machine to come up
	I1028 11:13:35.049044  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.049668  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has current primary IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.049716  150723 main.go:141] libmachine: (ha-928358-m03) Found IP for machine: 192.168.39.44
	I1028 11:13:35.049734  150723 main.go:141] libmachine: (ha-928358-m03) Reserving static IP address...
	I1028 11:13:35.050296  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find host DHCP lease matching {name: "ha-928358-m03", mac: "52:54:00:7e:d3:f9", ip: "192.168.39.44"} in network mk-ha-928358
	I1028 11:13:35.126256  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Getting to WaitForSSH function...
	I1028 11:13:35.126303  150723 main.go:141] libmachine: (ha-928358-m03) Reserved static IP address: 192.168.39.44
	I1028 11:13:35.126318  150723 main.go:141] libmachine: (ha-928358-m03) Waiting for SSH to be available...
	I1028 11:13:35.128851  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.129272  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.129315  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.129446  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using SSH client type: external
	I1028 11:13:35.129476  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa (-rw-------)
	I1028 11:13:35.129507  150723 main.go:141] libmachine: (ha-928358-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:13:35.129520  150723 main.go:141] libmachine: (ha-928358-m03) DBG | About to run SSH command:
	I1028 11:13:35.129564  150723 main.go:141] libmachine: (ha-928358-m03) DBG | exit 0
	I1028 11:13:35.253921  150723 main.go:141] libmachine: (ha-928358-m03) DBG | SSH cmd err, output: <nil>: 
	I1028 11:13:35.254211  150723 main.go:141] libmachine: (ha-928358-m03) KVM machine creation complete!
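	The "Waiting to get IP" section above polls the DHCP leases of the mk-ha-928358 network and retries with a growing delay (234ms, 249ms, 409ms, ... up to several seconds) until the guest's MAC address resolves to an address. A generic sketch of that poll-with-backoff pattern is below; it is illustrative, not the retry.go implementation, and lookupIP is a placeholder for "read the lease for the guest's MAC".

	    // Sketch of a poll loop with increasing, jittered delay, like the
	    // retry.go lines above. Names and limits are illustrative.
	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	    	deadline := time.Now().Add(timeout)
	    	delay := 200 * time.Millisecond
	    	for time.Now().Before(deadline) {
	    		if ip, err := lookupIP(); err == nil && ip != "" {
	    			return ip, nil
	    		}
	    		// Grow the delay and add jitter between attempts.
	    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
	    		if delay < 5*time.Second {
	    			delay *= 2
	    		}
	    	}
	    	return "", errors.New("timed out waiting for machine to get an IP")
	    }

	    func main() {
	    	attempts := 0
	    	ip, err := waitForIP(func() (string, error) {
	    		attempts++
	    		if attempts < 4 {
	    			return "", errors.New("no lease yet")
	    		}
	    		return "192.168.39.44", nil
	    	}, 30*time.Second)
	    	fmt.Println(ip, err)
	    }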
	I1028 11:13:35.254512  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:35.255052  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:35.255255  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:35.255399  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:13:35.255411  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetState
	I1028 11:13:35.256908  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:13:35.256921  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:13:35.256927  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:13:35.256932  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.259735  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.260211  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.260237  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.260436  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.260625  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.260784  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.260899  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.261057  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.261307  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.261321  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:13:35.360859  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:13:35.360890  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:13:35.360902  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.364454  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.364848  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.364904  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.365213  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.365431  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.365607  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.365742  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.365932  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.366116  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.366130  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:13:35.470987  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:13:35.471094  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:13:35.471109  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:13:35.471120  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.471399  150723 buildroot.go:166] provisioning hostname "ha-928358-m03"
	I1028 11:13:35.471424  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.471622  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.474085  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.474509  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.474542  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.474681  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.474871  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.475021  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.475156  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.475305  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.475494  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.475510  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358-m03 && echo "ha-928358-m03" | sudo tee /etc/hostname
	I1028 11:13:35.593400  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358-m03
	
	I1028 11:13:35.593429  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.596415  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.596740  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.596767  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.596962  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.597183  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.597361  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.597490  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.597704  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.597875  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.597892  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:13:35.715751  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
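	The hostname step above is idempotent: it sets the kernel hostname, writes /etc/hostname, and then only rewrites or appends the 127.0.1.1 entry in /etc/hosts if it is not already correct. A minimal sketch of composing that command string in Go (illustrative; the shell fragments are taken from the log):

	    // Sketch: building the idempotent hostname command run over SSH above.
	    package main

	    import "fmt"

	    func hostnameCmd(name string) string {
	    	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
	    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	      else
	        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	      fi
	    fi`, name)
	    }

	    func main() {
	    	fmt.Println(hostnameCmd("ha-928358-m03"))
	    }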
	I1028 11:13:35.715791  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:13:35.715811  150723 buildroot.go:174] setting up certificates
	I1028 11:13:35.715821  150723 provision.go:84] configureAuth start
	I1028 11:13:35.715834  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.716106  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:35.718868  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.719187  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.719219  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.719354  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.721477  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.721760  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.721790  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.721917  150723 provision.go:143] copyHostCerts
	I1028 11:13:35.721979  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:13:35.722032  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:13:35.722044  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:13:35.722140  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:13:35.722245  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:13:35.722278  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:13:35.722289  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:13:35.722332  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:13:35.722402  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:13:35.722429  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:13:35.722435  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:13:35.722459  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:13:35.722531  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358-m03 san=[127.0.0.1 192.168.39.44 ha-928358-m03 localhost minikube]
	I1028 11:13:35.825404  150723 provision.go:177] copyRemoteCerts
	I1028 11:13:35.825459  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:13:35.825483  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.828415  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.828773  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.828803  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.828972  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.829151  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.829337  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.829485  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:35.913472  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:13:35.913575  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:13:35.940828  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:13:35.940904  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:13:35.968009  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:13:35.968078  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 11:13:35.997592  150723 provision.go:87] duration metric: took 281.755193ms to configureAuth
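	configureAuth above generates a server certificate whose Subject Alternative Names are the san=[...] list in the log (loopback, the node IP 192.168.39.44, the hostname, localhost, minikube), signed by the machine CA, and then scp's ca.pem, server.pem and server-key.pem onto the guest. A minimal sketch of issuing such a cert with Go's standard crypto/x509 package follows; the CA here is generated in place purely for illustration, whereas minikube loads its existing ca.pem/ca-key.pem from disk, and error handling is elided.

	    // Sketch: a host certificate whose SANs mirror the san=[...] list above.
	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"fmt"
	    	"math/big"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// Stand-in CA; the real flow reuses the existing minikube CA key pair.
	    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    	caTmpl := &x509.Certificate{
	    		SerialNumber:          big.NewInt(1),
	    		Subject:               pkix.Name{CommonName: "minikubeCA"},
	    		NotBefore:             time.Now(),
	    		NotAfter:              time.Now().Add(24 * time.Hour),
	    		IsCA:                  true,
	    		KeyUsage:              x509.KeyUsageCertSign,
	    		BasicConstraintsValid: true,
	    	}
	    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	    	ca, _ := x509.ParseCertificate(caDER)

	    	// Server certificate with the SANs seen in the log.
	    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    	srvTmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(2),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-928358-m03"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		DNSNames:     []string{"ha-928358-m03", "localhost", "minikube"},
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.44")},
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	    	fmt.Println(len(der), err)
	    }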
	I1028 11:13:35.997618  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:13:35.997801  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:35.997869  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.000450  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.000935  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.000970  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.001165  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.001385  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.001575  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.001734  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.001893  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:36.002062  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:36.002076  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:13:36.221329  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:13:36.221364  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:13:36.221433  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetURL
	I1028 11:13:36.222571  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using libvirt version 6000000
	I1028 11:13:36.224781  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.225156  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.225179  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.225329  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:13:36.225344  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:13:36.225353  150723 client.go:171] duration metric: took 23.895703285s to LocalClient.Create
	I1028 11:13:36.225379  150723 start.go:167] duration metric: took 23.895771231s to libmachine.API.Create "ha-928358"
	I1028 11:13:36.225390  150723 start.go:293] postStartSetup for "ha-928358-m03" (driver="kvm2")
	I1028 11:13:36.225399  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:13:36.225413  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.225669  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:13:36.225696  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.227681  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.227995  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.228023  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.228147  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.228314  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.228474  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.228601  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.313594  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:13:36.318443  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:13:36.318477  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:13:36.318544  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:13:36.318614  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:13:36.318624  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:13:36.318705  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:13:36.330227  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:13:36.357995  150723 start.go:296] duration metric: took 132.588764ms for postStartSetup
	I1028 11:13:36.358059  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:36.358728  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:36.361773  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.362238  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.362267  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.362589  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:36.362828  150723 start.go:128] duration metric: took 24.052057424s to createHost
	I1028 11:13:36.362855  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.365684  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.365985  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.366016  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.366211  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.366426  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.366575  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.366696  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.366842  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:36.367055  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:36.367079  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:13:36.470814  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114016.442636655
	
	I1028 11:13:36.470843  150723 fix.go:216] guest clock: 1730114016.442636655
	I1028 11:13:36.470853  150723 fix.go:229] Guest: 2024-10-28 11:13:36.442636655 +0000 UTC Remote: 2024-10-28 11:13:36.362843133 +0000 UTC m=+156.939582341 (delta=79.793522ms)
	I1028 11:13:36.470869  150723 fix.go:200] guest clock delta is within tolerance: 79.793522ms
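	The guest clock check above runs `date +%s.%N` inside the VM and compares it with the host's wall clock; provisioning continues only if the delta (here roughly 80ms) is within tolerance, otherwise the guest clock would be adjusted. A minimal sketch of that comparison, with an assumed tolerance value:

	    // Sketch: comparing the guest's `date +%s.%N` output against a local
	    // timestamp, as in the fix.go lines above. Tolerance is illustrative.
	    package main

	    import (
	    	"fmt"
	    	"math"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    func clockDelta(guestDate string, local time.Time) (time.Duration, error) {
	    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
	    	if err != nil {
	    		return 0, err
	    	}
	    	guest := time.Unix(0, int64(secs*float64(time.Second)))
	    	return local.Sub(guest), nil
	    }

	    func main() {
	    	const tolerance = 2 * time.Second // hypothetical threshold
	    	delta, err := clockDelta("1730114016.442636655", time.Unix(0, 1730114016362843133))
	    	if err != nil {
	    		panic(err)
	    	}
	    	if time.Duration(math.Abs(float64(delta))) <= tolerance {
	    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	    	} else {
	    		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	    	}
	    }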
	I1028 11:13:36.470874  150723 start.go:83] releasing machines lock for "ha-928358-m03", held for 24.160222671s
	I1028 11:13:36.470894  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.471174  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:36.473802  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.474314  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.474345  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.476703  150723 out.go:177] * Found network options:
	I1028 11:13:36.478253  150723 out.go:177]   - NO_PROXY=192.168.39.206,192.168.39.15
	W1028 11:13:36.479492  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:13:36.479516  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:13:36.479532  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480171  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480372  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480474  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:13:36.480516  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	W1028 11:13:36.480627  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:13:36.480648  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:13:36.480710  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:13:36.480733  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.483390  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483597  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483802  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.483836  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483976  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.484137  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.484152  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.484171  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.484240  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.484323  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.484392  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.484441  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.484542  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.484643  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.722609  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:13:36.728895  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:13:36.728959  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:13:36.746783  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:13:36.746814  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:13:36.746889  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:13:36.764176  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:13:36.780539  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:13:36.780611  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:13:36.795323  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:13:36.811733  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:13:36.943649  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:13:37.116480  150723 docker.go:233] disabling docker service ...
	I1028 11:13:37.116541  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:13:37.131848  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:13:37.146207  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:13:37.271760  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:13:37.397315  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:13:37.413150  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:13:37.433193  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:13:37.433274  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.448784  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:13:37.448861  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.461820  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.474878  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.487273  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:13:37.500384  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.513109  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.533296  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.546472  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:13:37.557495  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:13:37.557598  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:13:37.573136  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:13:37.584661  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:13:37.701023  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:13:37.798120  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:13:37.798207  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:13:37.803954  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:13:37.804021  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:13:37.808938  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:13:37.851814  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:13:37.851905  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:13:37.881347  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:13:37.916129  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:13:37.917503  150723 out.go:177]   - env NO_PROXY=192.168.39.206
	I1028 11:13:37.918841  150723 out.go:177]   - env NO_PROXY=192.168.39.206,192.168.39.15
	I1028 11:13:37.920060  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:37.923080  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:37.923530  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:37.923560  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:37.923801  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:13:37.928489  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:13:37.944276  150723 mustload.go:65] Loading cluster: ha-928358
	I1028 11:13:37.944540  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:37.944876  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:37.944917  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:37.960868  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I1028 11:13:37.961448  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:37.961978  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:37.962000  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:37.962320  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:37.962554  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:13:37.964176  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:13:37.964500  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:37.964546  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:37.980099  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I1028 11:13:37.980536  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:37.980994  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:37.981027  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:37.981316  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:37.981476  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:13:37.981636  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.44
	I1028 11:13:37.981649  150723 certs.go:194] generating shared ca certs ...
	I1028 11:13:37.981667  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:37.981815  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:13:37.981867  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:13:37.981880  150723 certs.go:256] generating profile certs ...
	I1028 11:13:37.981981  150723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:13:37.982024  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408
	I1028 11:13:37.982045  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.44 192.168.39.254]
	I1028 11:13:38.031818  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 ...
	I1028 11:13:38.031849  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408: {Name:mk24630c498d89b32162095507c0812c854412bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:38.032046  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408 ...
	I1028 11:13:38.032062  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408: {Name:mk38f2fd390923bb1dfc386b88fc31f22cbd1405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:38.032164  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:13:38.032326  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:13:38.032501  150723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:13:38.032524  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:13:38.032548  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:13:38.032568  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:13:38.032585  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:13:38.032605  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:13:38.032622  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:13:38.032641  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:13:38.045605  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:13:38.045699  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:13:38.045758  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:13:38.045774  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:13:38.045809  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:13:38.045836  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:13:38.045857  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:13:38.045912  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:13:38.045950  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.045974  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.045992  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.046044  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:13:38.049011  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:38.049464  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:13:38.049485  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:38.049679  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:13:38.049889  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:13:38.050031  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:13:38.050163  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:13:38.129875  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:13:38.135272  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:13:38.146812  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:13:38.151195  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:13:38.162579  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:13:38.167018  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:13:38.178835  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:13:38.183162  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:13:38.195172  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:13:38.199929  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:13:38.212017  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:13:38.216559  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:13:38.228337  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:13:38.256831  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:13:38.282349  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:13:38.312381  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:13:38.340368  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:13:38.368852  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:13:38.396585  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:13:38.425195  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:13:38.453101  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:13:38.479115  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:13:38.505463  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:13:38.531445  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:13:38.550676  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:13:38.570134  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:13:38.588413  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:13:38.606756  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:13:38.626726  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:13:38.646275  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:13:38.665976  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:13:38.672176  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:13:38.685017  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.690136  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.690209  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.697711  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:13:38.712239  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:13:38.725832  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.730869  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.730941  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.737271  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:13:38.751047  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:13:38.763980  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.769518  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.769615  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.776609  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:13:38.791196  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:13:38.796201  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:13:38.796261  150723 kubeadm.go:934] updating node {m03 192.168.39.44 8443 v1.31.2 crio true true} ...
	I1028 11:13:38.796362  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:13:38.796397  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:13:38.796470  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:13:38.817160  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:13:38.817224  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:13:38.817279  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:13:38.829712  150723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:13:38.829765  150723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:13:38.842596  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:13:38.842645  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:13:38.842602  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:13:38.842708  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:13:38.842755  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:13:38.842602  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:13:38.842821  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:13:38.842886  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:13:38.849835  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:13:38.849867  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:13:38.850062  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:13:38.850096  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:13:38.869860  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:13:38.870019  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:13:39.008547  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:13:39.008597  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:13:39.841044  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:13:39.851424  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:13:39.870537  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:13:39.890208  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:13:39.908650  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:13:39.913130  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:13:39.926430  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:13:40.057322  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:13:40.076284  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:13:40.076669  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:40.076716  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:40.094065  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I1028 11:13:40.094505  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:40.095080  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:40.095109  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:40.095526  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:40.095722  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:13:40.095896  150723 start.go:317] joinCluster: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:13:40.096063  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:13:40.096090  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:13:40.099282  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:40.099834  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:13:40.099865  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:40.100013  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:13:40.100216  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:13:40.100410  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:13:40.100563  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:13:40.273359  150723 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:13:40.273397  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a413hq.qk9z79cdsin0pfn9 --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443"
	I1028 11:14:04.540358  150723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a413hq.qk9z79cdsin0pfn9 --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443": (24.266932187s)
	I1028 11:14:04.540403  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:14:05.110298  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358-m03 minikube.k8s.io/updated_at=2024_10_28T11_14_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=false
	I1028 11:14:05.258236  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-928358-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:14:05.400029  150723 start.go:319] duration metric: took 25.304126551s to joinCluster
	I1028 11:14:05.400118  150723 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:14:05.400571  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:14:05.401586  150723 out.go:177] * Verifying Kubernetes components...
	I1028 11:14:05.403593  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:14:05.647217  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:14:05.664862  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:14:05.665098  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:14:05.665166  150723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.206:8443
	I1028 11:14:05.665399  150723 node_ready.go:35] waiting up to 6m0s for node "ha-928358-m03" to be "Ready" ...
	I1028 11:14:05.665469  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:05.665476  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:05.665484  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:05.665490  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:05.669744  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:06.165968  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:06.165997  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:06.166009  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:06.166016  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:06.170123  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:06.666317  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:06.666416  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:06.666445  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:06.666462  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:06.670843  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:07.165728  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:07.165755  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:07.165768  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:07.165776  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:07.169304  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:07.666123  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:07.666154  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:07.666165  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:07.666171  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:07.669713  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:07.670892  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:08.166009  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:08.166031  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:08.166039  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:08.166043  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:08.169692  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:08.666389  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:08.666423  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:08.666436  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:08.666446  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:08.671535  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:14:09.166494  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:09.166518  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:09.166530  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:09.166537  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:09.170858  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:09.665722  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:09.665745  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:09.665753  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:09.665762  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:09.670170  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:09.671084  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:10.165695  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:10.165724  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:10.165735  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:10.165742  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:10.173147  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:10.666401  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:10.666429  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:10.666440  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:10.666443  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:10.671830  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:14:11.165701  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:11.165722  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:11.165731  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:11.165737  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:11.228148  150723 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I1028 11:14:11.666333  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:11.666388  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:11.666401  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:11.666408  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:11.670186  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:11.671264  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:12.165684  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:12.165709  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:12.165715  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:12.165719  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:12.170052  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:12.666466  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:12.666494  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:12.666504  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:12.666509  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:12.670352  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:13.166382  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:13.166410  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:13.166421  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:13.166427  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:13.171235  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:13.666623  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:13.666647  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:13.666656  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:13.666661  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:13.670621  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:14.165740  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:14.165767  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:14.165776  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:14.165783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:14.169178  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:14.170214  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:14.666184  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:14.666206  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:14.666215  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:14.666219  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:14.670466  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:15.166232  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:15.166261  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:15.166272  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:15.166276  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:15.173444  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:15.666306  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:15.666335  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:15.666344  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:15.666348  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:15.670385  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:16.166429  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:16.166461  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:16.166474  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:16.166481  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:16.170181  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:16.170699  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:16.665698  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:16.665723  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:16.665730  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:16.665734  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:16.669776  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:17.165640  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:17.165664  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:17.165672  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:17.165676  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:17.169368  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:17.666177  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:17.666202  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:17.666210  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:17.666214  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:17.670134  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.165917  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:18.165940  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:18.165948  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:18.165952  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:18.169496  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.665925  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:18.665949  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:18.665971  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:18.665976  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:18.669433  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.670970  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:19.165694  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:19.165718  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:19.165728  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:19.165732  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:19.170437  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:19.666095  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:19.666123  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:19.666134  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:19.666141  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:19.668970  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:20.166291  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:20.166314  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:20.166322  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:20.166326  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:20.170016  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:20.665789  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:20.665815  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:20.665822  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:20.665827  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:20.669287  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:21.165826  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:21.165853  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:21.165862  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:21.165868  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:21.169651  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:21.170332  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:21.665771  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:21.665804  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:21.665816  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:21.665822  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:21.669841  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:22.166380  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:22.166406  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:22.166414  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:22.166420  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:22.169816  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:22.666341  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:22.666364  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:22.666372  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:22.666377  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:22.670923  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:23.165737  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:23.165762  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.165771  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.165776  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.169299  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.665765  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:23.665789  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.665797  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.665801  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.669697  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.670619  150723 node_ready.go:49] node "ha-928358-m03" has status "Ready":"True"
	I1028 11:14:23.670643  150723 node_ready.go:38] duration metric: took 18.005227415s for node "ha-928358-m03" to be "Ready" ...
	I1028 11:14:23.670662  150723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:14:23.670813  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:23.670845  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.670858  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.670875  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.677257  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:23.683895  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.683990  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gnm9r
	I1028 11:14:23.683999  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.684007  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.684011  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.688327  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:23.688931  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.688948  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.688956  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.688960  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.691787  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.692523  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.692543  150723 pod_ready.go:82] duration metric: took 8.61912ms for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.692554  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.692624  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xxxgw
	I1028 11:14:23.692632  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.692639  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.692645  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.695738  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.696515  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.696533  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.696542  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.696548  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.699472  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.700068  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.700097  150723 pod_ready.go:82] duration metric: took 7.535535ms for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.700107  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.700162  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358
	I1028 11:14:23.700171  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.700178  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.700184  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.702917  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.703534  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.703550  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.703559  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.703566  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.706103  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.706650  150723 pod_ready.go:93] pod "etcd-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.706674  150723 pod_ready.go:82] duration metric: took 6.560031ms for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.706686  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.706758  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m02
	I1028 11:14:23.706768  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.706778  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.706785  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.709373  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.710451  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:23.710472  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.710484  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.710490  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.713376  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.713980  150723 pod_ready.go:93] pod "etcd-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.714010  150723 pod_ready.go:82] duration metric: took 7.313443ms for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.714024  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.866359  150723 request.go:632] Waited for 152.224049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m03
	I1028 11:14:23.866476  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m03
	I1028 11:14:23.866492  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.866504  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.866516  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.871166  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.066273  150723 request.go:632] Waited for 194.358951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:24.066350  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:24.066361  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.066372  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.066378  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.070313  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.071003  150723 pod_ready.go:93] pod "etcd-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.071021  150723 pod_ready.go:82] duration metric: took 356.990267ms for pod "etcd-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.071039  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.266224  150723 request.go:632] Waited for 195.110039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:14:24.266285  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:14:24.266290  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.266298  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.266303  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.271102  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.466777  150723 request.go:632] Waited for 195.051662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:24.466835  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:24.466840  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.466848  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.466857  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.471602  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.472438  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.472458  150723 pod_ready.go:82] duration metric: took 401.411661ms for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.472468  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.666245  150723 request.go:632] Waited for 193.688569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:14:24.666314  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:14:24.666321  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.666332  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.666337  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.670192  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.866165  150723 request.go:632] Waited for 195.218003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:24.866225  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:24.866230  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.866237  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.866242  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.869696  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.870520  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.870539  150723 pod_ready.go:82] duration metric: took 398.065091ms for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.870549  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.066723  150723 request.go:632] Waited for 196.090526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m03
	I1028 11:14:25.066790  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m03
	I1028 11:14:25.066796  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.066812  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.066818  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.070840  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:25.266492  150723 request.go:632] Waited for 194.408437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:25.266550  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:25.266555  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.266563  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.266567  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.270440  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:25.271647  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:25.271668  150723 pod_ready.go:82] duration metric: took 401.112731ms for pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.271677  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.466686  150723 request.go:632] Waited for 194.942796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:14:25.466776  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:14:25.466782  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.466791  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.466799  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.478807  150723 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:14:25.666227  150723 request.go:632] Waited for 186.359371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:25.666322  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:25.666335  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.666346  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.666355  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.669950  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:25.670691  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:25.670710  150723 pod_ready.go:82] duration metric: took 399.026254ms for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.670723  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.866724  150723 request.go:632] Waited for 195.936368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:14:25.866801  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:14:25.866807  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.866814  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.866819  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.870640  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.065827  150723 request.go:632] Waited for 194.310294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:26.065907  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:26.065912  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.065920  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.065925  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.069699  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.070459  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.070478  150723 pod_ready.go:82] duration metric: took 399.749253ms for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.070489  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.266701  150723 request.go:632] Waited for 196.138179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m03
	I1028 11:14:26.266792  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m03
	I1028 11:14:26.266809  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.266825  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.266832  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.270679  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.466081  150723 request.go:632] Waited for 194.361983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:26.466174  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:26.466182  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.466194  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.466206  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.470252  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:26.470784  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.470804  150723 pod_ready.go:82] duration metric: took 400.309396ms for pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.470815  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.665844  150723 request.go:632] Waited for 194.95975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:14:26.665902  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:14:26.665925  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.665956  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.665963  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.669385  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.866618  150723 request.go:632] Waited for 196.393847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:26.866674  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:26.866679  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.866687  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.866690  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.870012  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.870701  150723 pod_ready.go:93] pod "kube-proxy-8fxdn" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.870720  150723 pod_ready.go:82] duration metric: took 399.898606ms for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.870734  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.065775  150723 request.go:632] Waited for 194.965869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:14:27.065845  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:14:27.065850  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.065858  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.065865  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.069945  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:27.266078  150723 request.go:632] Waited for 195.378208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:27.266154  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:27.266159  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.266167  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.266174  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.269961  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:27.270605  150723 pod_ready.go:93] pod "kube-proxy-cfhp5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:27.270625  150723 pod_ready.go:82] duration metric: took 399.882701ms for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.270640  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-np8x5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.466435  150723 request.go:632] Waited for 195.719587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-np8x5
	I1028 11:14:27.466503  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-np8x5
	I1028 11:14:27.466511  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.466550  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.466562  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.473780  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:27.666214  150723 request.go:632] Waited for 191.347069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:27.666284  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:27.666291  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.666298  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.666302  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.670820  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:27.671554  150723 pod_ready.go:93] pod "kube-proxy-np8x5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:27.671578  150723 pod_ready.go:82] duration metric: took 400.929643ms for pod "kube-proxy-np8x5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.671589  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.866741  150723 request.go:632] Waited for 195.08002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:14:27.866814  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:14:27.866821  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.866832  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.866843  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.870682  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.066337  150723 request.go:632] Waited for 194.812157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:28.066403  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:28.066408  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.066416  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.066420  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.069743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.070462  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.070483  150723 pod_ready.go:82] duration metric: took 398.887712ms for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.070497  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.265961  150723 request.go:632] Waited for 195.392733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:14:28.266039  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:14:28.266047  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.266057  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.266088  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.269740  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.465851  150723 request.go:632] Waited for 195.318291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:28.465931  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:28.465937  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.465949  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.465957  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.470812  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:28.471696  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.471720  150723 pod_ready.go:82] duration metric: took 401.210524ms for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.471733  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.665763  150723 request.go:632] Waited for 193.940561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m03
	I1028 11:14:28.665854  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m03
	I1028 11:14:28.665869  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.665877  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.665883  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.669746  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.866768  150723 request.go:632] Waited for 196.382736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:28.866827  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:28.866832  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.866840  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.866844  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.870665  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.871107  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.871125  150723 pod_ready.go:82] duration metric: took 399.382061ms for pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.871136  150723 pod_ready.go:39] duration metric: took 5.200463354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:14:28.871154  150723 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:14:28.871205  150723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:14:28.894991  150723 api_server.go:72] duration metric: took 23.494825881s to wait for apiserver process to appear ...
	I1028 11:14:28.895029  150723 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:14:28.895053  150723 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1028 11:14:28.901769  150723 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1028 11:14:28.901850  150723 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1028 11:14:28.901857  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.901868  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.901879  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.903049  150723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:14:28.903133  150723 api_server.go:141] control plane version: v1.31.2
	I1028 11:14:28.903153  150723 api_server.go:131] duration metric: took 8.11544ms to wait for apiserver health ...
	I1028 11:14:28.903164  150723 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:14:29.066557  150723 request.go:632] Waited for 163.310035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.066623  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.066628  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.066650  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.066657  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.073405  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:29.079996  150723 system_pods.go:59] 24 kube-system pods found
	I1028 11:14:29.080029  150723 system_pods.go:61] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:14:29.080039  150723 system_pods.go:61] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:14:29.080043  150723 system_pods.go:61] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:14:29.080047  150723 system_pods.go:61] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:14:29.080050  150723 system_pods.go:61] "etcd-ha-928358-m03" [56e4453a-65fd-4b3f-9556-e5cec7aa0400] Running
	I1028 11:14:29.080053  150723 system_pods.go:61] "kindnet-9k2mz" [946ea25c-8bc6-46d5-9804-7d8f75ba2ad4] Running
	I1028 11:14:29.080056  150723 system_pods.go:61] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:14:29.080062  150723 system_pods.go:61] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:14:29.080065  150723 system_pods.go:61] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:14:29.080068  150723 system_pods.go:61] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:14:29.080071  150723 system_pods.go:61] "kube-apiserver-ha-928358-m03" [b5e63feb-e15c-42f4-8e49-9775a7602add] Running
	I1028 11:14:29.080075  150723 system_pods.go:61] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:14:29.080079  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:14:29.080085  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m03" [ad543df1-fd1e-4fbe-b70b-06af7d39f971] Running
	I1028 11:14:29.080089  150723 system_pods.go:61] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:14:29.080094  150723 system_pods.go:61] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:14:29.080099  150723 system_pods.go:61] "kube-proxy-np8x5" [c8dd1d78-2375-49d4-b476-ec52dd65830b] Running
	I1028 11:14:29.080103  150723 system_pods.go:61] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:14:29.080109  150723 system_pods.go:61] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:14:29.080117  150723 system_pods.go:61] "kube-scheduler-ha-928358-m03" [b9809d8d-8a45-4363-9b03-55995deb6b62] Running
	I1028 11:14:29.080124  150723 system_pods.go:61] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:14:29.080135  150723 system_pods.go:61] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:14:29.080139  150723 system_pods.go:61] "kube-vip-ha-928358-m03" [894e8b21-2ffc-4ad5-89b1-80c915aecfb9] Running
	I1028 11:14:29.080142  150723 system_pods.go:61] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:14:29.080148  150723 system_pods.go:74] duration metric: took 176.977613ms to wait for pod list to return data ...
	I1028 11:14:29.080159  150723 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:14:29.266599  150723 request.go:632] Waited for 186.363794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:14:29.266653  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:14:29.266658  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.266665  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.266669  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.271060  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:29.271213  150723 default_sa.go:45] found service account: "default"
	I1028 11:14:29.271235  150723 default_sa.go:55] duration metric: took 191.069027ms for default service account to be created ...
	I1028 11:14:29.271247  150723 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:14:29.466315  150723 request.go:632] Waited for 194.981882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.466408  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.466421  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.466436  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.466448  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.472918  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:29.481266  150723 system_pods.go:86] 24 kube-system pods found
	I1028 11:14:29.481302  150723 system_pods.go:89] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:14:29.481308  150723 system_pods.go:89] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:14:29.481312  150723 system_pods.go:89] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:14:29.481316  150723 system_pods.go:89] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:14:29.481320  150723 system_pods.go:89] "etcd-ha-928358-m03" [56e4453a-65fd-4b3f-9556-e5cec7aa0400] Running
	I1028 11:14:29.481324  150723 system_pods.go:89] "kindnet-9k2mz" [946ea25c-8bc6-46d5-9804-7d8f75ba2ad4] Running
	I1028 11:14:29.481327  150723 system_pods.go:89] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:14:29.481330  150723 system_pods.go:89] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:14:29.481333  150723 system_pods.go:89] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:14:29.481336  150723 system_pods.go:89] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:14:29.481339  150723 system_pods.go:89] "kube-apiserver-ha-928358-m03" [b5e63feb-e15c-42f4-8e49-9775a7602add] Running
	I1028 11:14:29.481343  150723 system_pods.go:89] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:14:29.481346  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:14:29.481350  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m03" [ad543df1-fd1e-4fbe-b70b-06af7d39f971] Running
	I1028 11:14:29.481354  150723 system_pods.go:89] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:14:29.481359  150723 system_pods.go:89] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:14:29.481362  150723 system_pods.go:89] "kube-proxy-np8x5" [c8dd1d78-2375-49d4-b476-ec52dd65830b] Running
	I1028 11:14:29.481364  150723 system_pods.go:89] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:14:29.481368  150723 system_pods.go:89] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:14:29.481372  150723 system_pods.go:89] "kube-scheduler-ha-928358-m03" [b9809d8d-8a45-4363-9b03-55995deb6b62] Running
	I1028 11:14:29.481378  150723 system_pods.go:89] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:14:29.481382  150723 system_pods.go:89] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:14:29.481388  150723 system_pods.go:89] "kube-vip-ha-928358-m03" [894e8b21-2ffc-4ad5-89b1-80c915aecfb9] Running
	I1028 11:14:29.481392  150723 system_pods.go:89] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:14:29.481402  150723 system_pods.go:126] duration metric: took 210.146699ms to wait for k8s-apps to be running ...
	I1028 11:14:29.481415  150723 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:14:29.481478  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:14:29.499294  150723 system_svc.go:56] duration metric: took 17.867458ms WaitForService to wait for kubelet
	I1028 11:14:29.499345  150723 kubeadm.go:582] duration metric: took 24.099188581s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:14:29.499369  150723 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:14:29.666183  150723 request.go:632] Waited for 166.698659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1028 11:14:29.666244  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1028 11:14:29.666250  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.666258  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.666262  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.670701  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:29.671840  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671859  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671869  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671873  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671877  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671880  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671883  150723 node_conditions.go:105] duration metric: took 172.509467ms to run NodePressure ...
	I1028 11:14:29.671895  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:14:29.671914  150723 start.go:255] writing updated cluster config ...
	I1028 11:14:29.672186  150723 ssh_runner.go:195] Run: rm -f paused
	I1028 11:14:29.727881  150723 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:14:29.729936  150723 out.go:177] * Done! kubectl is now configured to use "ha-928358" cluster and "default" namespace by default
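	The start log above shows the readiness wait in action: roughly every 500ms the client GETs the node object (and later each system pod) from the apiserver until the Ready condition reports True, backing off when client-side throttling kicks in. A minimal sketch of such a node-readiness poll with client-go is shown below; the helper name waitNodeReady and the kubeconfig wiring are illustrative assumptions, not minikube's own implementation.

	```go
	// Sketch: poll a node until its Ready condition is True.
	// waitNodeReady is a hypothetical helper, not minikube code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
		// Poll every 500ms (the log above shows ~500ms between GETs) until the
		// node reports Ready=True or the timeout expires.
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Assumes a local kubeconfig pointing at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), client, "ha-928358-m03", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
	```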
	
	
	==> CRI-O <==
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.887539982Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114298887517312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00fabe64-18d9-4ca1-ad63-3dd78286f32d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.888266925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=573e24c3-7d6b-45a1-a3a9-8f3bc8c80dcb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.888318754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=573e24c3-7d6b-45a1-a3a9-8f3bc8c80dcb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.888537108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=573e24c3-7d6b-45a1-a3a9-8f3bc8c80dcb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.929917596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ee1cdfe-06d3-419d-ab20-b882ce465b98 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.930038795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ee1cdfe-06d3-419d-ab20-b882ce465b98 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.931222824Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09fa11b9-dfb7-43b9-93df-15acb1b4f6c4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.931671712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114298931646183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09fa11b9-dfb7-43b9-93df-15acb1b4f6c4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.932287808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0631cb5a-d9a8-47e5-8253-fe4252234e3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.932366764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0631cb5a-d9a8-47e5-8253-fe4252234e3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.932614085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0631cb5a-d9a8-47e5-8253-fe4252234e3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.972541969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4ed20df-4ba3-4060-a7dd-cb427f74b0df name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.972643582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4ed20df-4ba3-4060-a7dd-cb427f74b0df name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.973841982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=472d4d02-a3d5-4978-9328-a53ffb78f0a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.974373730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114298974349190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=472d4d02-a3d5-4978-9328-a53ffb78f0a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.974759927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=893e759a-9415-4fc8-a1ff-e7daee663456 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.974842262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=893e759a-9415-4fc8-a1ff-e7daee663456 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:18 ha-928358 crio[664]: time="2024-10-28 11:18:18.975129745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=893e759a-9415-4fc8-a1ff-e7daee663456 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:19 ha-928358 crio[664]: time="2024-10-28 11:18:19.018856643Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a2fc7b3-7b8a-44ea-86b6-00ecb8d7b2a9 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:19 ha-928358 crio[664]: time="2024-10-28 11:18:19.018939513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a2fc7b3-7b8a-44ea-86b6-00ecb8d7b2a9 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:19 ha-928358 crio[664]: time="2024-10-28 11:18:19.020902991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08efa66c-56fc-498e-82c7-0c43a9b049df name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:19 ha-928358 crio[664]: time="2024-10-28 11:18:19.021532648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114299021508003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08efa66c-56fc-498e-82c7-0c43a9b049df name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:19 ha-928358 crio[664]: time="2024-10-28 11:18:19.022702155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18bc8485-e5d2-41be-83da-a9923859ad35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:19 ha-928358 crio[664]: time="2024-10-28 11:18:19.022760264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18bc8485-e5d2-41be-83da-a9923859ad35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:19 ha-928358 crio[664]: time="2024-10-28 11:18:19.022975909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18bc8485-e5d2-41be-83da-a9923859ad35 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	678eb45e28d22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   6fcf4a6026d95       busybox-7dff88458-dnw8z
	267b822906895       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   554c79cdc22b7       coredns-7c65d6cfc9-gnm9r
	0ec81022134ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b55f959c9e26e       coredns-7c65d6cfc9-xxxgw
	101876df5ba49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   cc9b8c6075292       storage-provisioner
	93fda9ea564e1       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   af0a9858b9f50       kindnet-pq9gp
	6af78d85866c9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   f07333184a007       kube-proxy-8fxdn
	b4500f47684e6       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   aef8ad820f733       kube-vip-ha-928358
	a75ab3d16aba2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   841e8a03bb9b3       etcd-ha-928358
	f8221151573cf       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   1975c249cdfee       kube-apiserver-ha-928358
	e735b7e201a7d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   2efa4330e0881       kube-controller-manager-ha-928358
	1be8f3556358e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   041b17e002580       kube-scheduler-ha-928358
	
	
	==> coredns [0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962] <==
	[INFO] 10.244.2.2:54221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001644473s
	[INFO] 10.244.2.2:58493 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00055293s
	[INFO] 10.244.1.2:59466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000373197s
	[INFO] 10.244.1.2:59196 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002135371s
	[INFO] 10.244.0.4:48789 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140504s
	[INFO] 10.244.0.4:43613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168237s
	[INFO] 10.244.0.4:38143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.016935286s
	[INFO] 10.244.0.4:39110 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177298s
	[INFO] 10.244.2.2:46780 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169863s
	[INFO] 10.244.2.2:56782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002009621s
	[INFO] 10.244.2.2:39525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138628s
	[INFO] 10.244.2.2:53832 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216458s
	[INFO] 10.244.1.2:39727 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000226061s
	[INFO] 10.244.1.2:60944 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001495416s
	[INFO] 10.244.1.2:36506 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119701s
	[INFO] 10.244.1.2:59657 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001674s
	[INFO] 10.244.0.4:50368 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178977s
	[INFO] 10.244.0.4:47562 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089999s
	[INFO] 10.244.1.2:44983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013645s
	[INFO] 10.244.1.2:33581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164661s
	[INFO] 10.244.1.2:39245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099456s
	[INFO] 10.244.0.4:48286 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018935s
	[INFO] 10.244.0.4:33651 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000163132s
	[INFO] 10.244.2.2:57361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144876s
	[INFO] 10.244.2.2:38124 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021886s
	
	
	==> coredns [267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134] <==
	[INFO] 10.244.0.4:46197 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168175s
	[INFO] 10.244.0.4:43404 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138086s
	[INFO] 10.244.2.2:42078 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211245s
	[INFO] 10.244.2.2:43818 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001478975s
	[INFO] 10.244.2.2:36869 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148567s
	[INFO] 10.244.2.2:38696 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110904s
	[INFO] 10.244.1.2:53013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000625096s
	[INFO] 10.244.1.2:57247 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002184098s
	[INFO] 10.244.1.2:60298 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097712s
	[INFO] 10.244.1.2:42104 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099517s
	[INFO] 10.244.0.4:43344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166235s
	[INFO] 10.244.0.4:39756 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110369s
	[INFO] 10.244.2.2:51568 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132969s
	[INFO] 10.244.2.2:39038 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106245s
	[INFO] 10.244.2.2:36223 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090887s
	[INFO] 10.244.2.2:53817 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077711s
	[INFO] 10.244.1.2:45611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112879s
	[INFO] 10.244.0.4:48292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126001s
	[INFO] 10.244.0.4:49134 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000314244s
	[INFO] 10.244.2.2:38137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166744s
	[INFO] 10.244.2.2:49391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000218881s
	[INFO] 10.244.1.2:58619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152475s
	[INFO] 10.244.1.2:59879 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000283359s
	[INFO] 10.244.1.2:33696 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103786s
	[INFO] 10.244.1.2:41150 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120227s
	
	
	==> describe nodes <==
	Name:               ha-928358
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_11_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:11:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:12:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-928358
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3063a9eb16b941929fe95ea9deb85942
	  System UUID:                3063a9eb-16b9-4192-9fe9-5ea9deb85942
	  Boot ID:                    4750ce27-a752-459c-82e1-f46d3ba9e4fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dnw8z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 coredns-7c65d6cfc9-gnm9r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 coredns-7c65d6cfc9-xxxgw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 etcd-ha-928358                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system                 kindnet-pq9gp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m29s
	  kube-system                 kube-apiserver-ha-928358             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-928358    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-8fxdn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-scheduler-ha-928358             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-928358                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m28s  kube-proxy       
	  Normal  Starting                 6m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m34s  kubelet          Node ha-928358 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s  kubelet          Node ha-928358 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s  kubelet          Node ha-928358 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	  Normal  NodeReady                6m17s  kubelet          Node ha-928358 status is now: NodeReady
	  Normal  RegisteredNode           5m24s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	  Normal  RegisteredNode           4m9s   node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	
	
	Name:               ha-928358-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_12_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:12:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:15:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-928358-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb0972414207466c8358559557f25b09
	  System UUID:                fb097241-4207-466c-8358-559557f25b09
	  Boot ID:                    69b9f603-4134-42b4-a3f9-eeae845c3c91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tx5tk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-928358-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-j4vj5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-928358-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-928358-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-cfhp5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-928358-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-vip-ha-928358-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node ha-928358-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node ha-928358-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m33s)  kubelet          Node ha-928358-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-928358-m02 status is now: NodeNotReady
	
	
	Name:               ha-928358-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_14_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:14:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-928358-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebf69c3934784b66bc2bf05f458d71ba
	  System UUID:                ebf69c39-3478-4b66-bc2b-f05f458d71ba
	  Boot ID:                    2e5043ad-620d-4233-b866-677c45434de6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h8ctp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-928358-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kindnet-9k2mz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m19s
	  kube-system                 kube-apiserver-ha-928358-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-ha-928358-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-np8x5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-ha-928358-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-vip-ha-928358-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node ha-928358-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node ha-928358-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node ha-928358-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	
	
	Name:               ha-928358-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_15_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:15:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-928358-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ee6c88b1c8c4fa2aebbfe4047465ead
	  System UUID:                6ee6c88b-1c8c-4fa2-aebb-fe4047465ead
	  Boot ID:                    b70ab214-29c9-4d90-9700-0ff1df9971f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-k2ddr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m10s
	  kube-system                 kube-proxy-fl4b7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m10s (x2 over 3m10s)  kubelet          Node ha-928358-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m10s (x2 over 3m10s)  kubelet          Node ha-928358-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m10s (x2 over 3m10s)  kubelet          Node ha-928358-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  RegisteredNode           3m5s                   node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  NodeReady                2m48s                  kubelet          Node ha-928358-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 11:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053627] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041855] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.945749] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.924544] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.657378] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.658005] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.063082] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059947] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.199848] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.133132] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.303491] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.303698] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.055659] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.938074] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +1.148998] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.072047] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087002] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.352589] kauditd_printk_skb: 21 callbacks suppressed
	[Oct28 11:12] kauditd_printk_skb: 38 callbacks suppressed
	[ +49.929447] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854] <==
	{"level":"warn","ts":"2024-10-28T11:18:19.531693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.535596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.544188Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.551051Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.557537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.558664Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.565248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.569641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.577941Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.586511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.594534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.600364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.600920Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.606152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.615885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.622509Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.628833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.634775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.639111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.643363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.645939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.647060Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.657498Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.666689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:19.700963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:18:19 up 7 min,  0 users,  load average: 0.59, 0.53, 0.29
	Linux ha-928358 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a] <==
	I1028 11:17:42.317978       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:17:52.309469       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:17:52.309528       1 main.go:300] handling current node
	I1028 11:17:52.309550       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:17:52.309558       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:17:52.309929       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:17:52.309971       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:17:52.310797       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:17:52.310848       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:02.315389       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:02.315498       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:18:02.315666       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:02.315707       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:18:02.315812       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:02.315836       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:02.315914       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:02.315935       1 main.go:300] handling current node
	I1028 11:18:12.318153       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:12.318184       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:12.318402       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:12.318430       1 main.go:300] handling current node
	I1028 11:18:12.318441       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:12.318446       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:18:12.318605       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:12.318645       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52] <==
	I1028 11:11:44.249575       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1028 11:11:44.264324       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1028 11:11:44.266721       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 11:11:44.273696       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 11:11:44.441833       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 11:11:45.375393       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:11:45.401215       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:11:45.422922       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:11:50.040543       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:11:50.160325       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:14:35.737044       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49680: use of closed network connection
	E1028 11:14:35.939412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49710: use of closed network connection
	E1028 11:14:36.137760       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49736: use of closed network connection
	E1028 11:14:36.353242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49742: use of closed network connection
	E1028 11:14:36.573304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49764: use of closed network connection
	E1028 11:14:36.795811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49780: use of closed network connection
	E1028 11:14:36.981176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49798: use of closed network connection
	E1028 11:14:37.177919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49830: use of closed network connection
	E1028 11:14:37.363976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49844: use of closed network connection
	E1028 11:14:37.667823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49884: use of closed network connection
	E1028 11:14:37.860879       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49906: use of closed network connection
	E1028 11:14:38.044254       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49922: use of closed network connection
	E1028 11:14:38.230562       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49930: use of closed network connection
	E1028 11:14:38.433175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49954: use of closed network connection
	E1028 11:14:38.620514       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49974: use of closed network connection
	
	
	==> kube-controller-manager [e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef] <==
	I1028 11:15:02.129745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m03"
	E1028 11:15:09.422518       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8k978 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8k978\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1028 11:15:09.795491       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-928358-m04\" does not exist"
	I1028 11:15:09.833650       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-928358-m04" podCIDRs=["10.244.3.0/24"]
	I1028 11:15:09.833720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:09.833754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.048409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.186481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.510390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:14.501689       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-928358-m04"
	I1028 11:15:14.502311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:14.708709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:20.001285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:31.204169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-928358-m04"
	I1028 11:15:31.204768       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:31.224821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:34.519983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:40.626763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:16:34.553439       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-928358-m04"
	I1028 11:16:34.556249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:34.585375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:34.698936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.004399ms"
	I1028 11:16:34.699212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.024µs"
	I1028 11:16:35.153194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:39.778629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	
	
	==> kube-proxy [6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:11:50.898284       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:11:50.922359       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E1028 11:11:50.922435       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:11:51.064127       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:11:51.064169       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:11:51.064206       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:11:51.084457       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:11:51.088588       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:11:51.088608       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:11:51.098854       1 config.go:199] "Starting service config controller"
	I1028 11:11:51.099108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:11:51.099342       1 config.go:328] "Starting node config controller"
	I1028 11:11:51.099355       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:11:51.122226       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:11:51.122243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:11:51.199431       1 shared_informer.go:320] Caches are synced for node config
	I1028 11:11:51.199505       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:11:51.222697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583] <==
	W1028 11:11:43.540244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.540296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.541960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:11:43.542068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.589795       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:11:43.589913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.666909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.667067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.681223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:11:43.681426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.721299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:11:43.721931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.811114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.811345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 11:11:46.351113       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:15:09.905243       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k2ddr\": pod kindnet-k2ddr is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k2ddr" node="ha-928358-m04"
	E1028 11:15:09.908212       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1733f64f-2a73-414c-a048-b4ad6b9bd117(kube-system/kindnet-k2ddr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k2ddr"
	E1028 11:15:09.910352       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k2ddr\": pod kindnet-k2ddr is already assigned to node \"ha-928358-m04\"" pod="kube-system/kindnet-k2ddr"
	I1028 11:15:09.910453       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k2ddr" node="ha-928358-m04"
	E1028 11:15:09.907070       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fl4b7\": pod kube-proxy-fl4b7 is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fl4b7" node="ha-928358-m04"
	E1028 11:15:09.910582       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 48c26642-8d42-43a1-ad06-ba9408499bf8(kube-system/kube-proxy-fl4b7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fl4b7"
	E1028 11:15:09.910623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fl4b7\": pod kube-proxy-fl4b7 is already assigned to node \"ha-928358-m04\"" pod="kube-system/kube-proxy-fl4b7"
	I1028 11:15:09.910661       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fl4b7" node="ha-928358-m04"
	E1028 11:15:09.930971       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tswkg\": pod kube-proxy-tswkg is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tswkg" node="ha-928358-m04"
	E1028 11:15:09.931171       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tswkg\": pod kube-proxy-tswkg is already assigned to node \"ha-928358-m04\"" pod="kube-system/kube-proxy-tswkg"
	
	
	==> kubelet <==
	Oct 28 11:16:45 ha-928358 kubelet[1312]: E1028 11:16:45.513274    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114205512809475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:16:45 ha-928358 kubelet[1312]: E1028 11:16:45.513333    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114205512809475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:16:55 ha-928358 kubelet[1312]: E1028 11:16:55.514793    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114215514414818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:16:55 ha-928358 kubelet[1312]: E1028 11:16:55.515166    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114215514414818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:05 ha-928358 kubelet[1312]: E1028 11:17:05.516628    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114225516360078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:05 ha-928358 kubelet[1312]: E1028 11:17:05.517193    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114225516360078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:15 ha-928358 kubelet[1312]: E1028 11:17:15.518657    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114235518443764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:15 ha-928358 kubelet[1312]: E1028 11:17:15.518678    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114235518443764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:25 ha-928358 kubelet[1312]: E1028 11:17:25.532318    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114245531090228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:25 ha-928358 kubelet[1312]: E1028 11:17:25.532805    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114245531090228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:35 ha-928358 kubelet[1312]: E1028 11:17:35.534490    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114255534180329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:35 ha-928358 kubelet[1312]: E1028 11:17:35.534569    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114255534180329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.349514    1312 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:17:45 ha-928358 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.536867    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114265536656122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.536910    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114265536656122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:55 ha-928358 kubelet[1312]: E1028 11:17:55.539160    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114275538681035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:55 ha-928358 kubelet[1312]: E1028 11:17:55.539208    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114275538681035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:05 ha-928358 kubelet[1312]: E1028 11:18:05.540899    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114285540540832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:05 ha-928358 kubelet[1312]: E1028 11:18:05.540940    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114285540540832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:15 ha-928358 kubelet[1312]: E1028 11:18:15.543044    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114295542712895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:15 ha-928358 kubelet[1312]: E1028 11:18:15.543124    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114295542712895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-928358 -n ha-928358
helpers_test.go:261: (dbg) Run:  kubectl --context ha-928358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (6.06s)
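The kubelet log above repeats two errors on ha-928358: the eviction manager cannot read dedicated image-filesystem stats from CRI-O ("missing image stats"), and the iptables canary cannot create its chain because the ip6tables nat table is unavailable. As a diagnostic sketch only, not part of the captured test output (the profile name is taken from the logs above), both conditions could be inspected from the host with:

out/minikube-linux-amd64 -p ha-928358 ssh -- sudo crictl imagefsinfo      # image filesystem stats CRI-O reports to the kubelet
out/minikube-linux-amd64 -p ha-928358 ssh -- sudo ip6tables -t nat -L -n  # fails the same way if the ip6tables nat table is missing
out/minikube-linux-amd64 -p ha-928358 ssh -- lsmod | grep ip6table_nat    # checks whether the ip6table_nat kernel module is loaded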

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr: (3.558721163s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
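The four assertions above expect a four-node cluster with three control-plane nodes after restarting m02 (all hosts and kubelets running, all three apiservers up). A rough sketch of the same check, not taken from the test and assuming the nodes carry the standard kubeadm control-plane label:

kubectl --context ha-928358 get nodes -o name | wc -l                                           # expected: 4
kubectl --context ha-928358 get nodes -l node-role.kubernetes.io/control-plane -o name | wc -l   # expected: 3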
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-928358 -n ha-928358
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 logs -n 25: (1.491338182s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m03_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m04 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp testdata/cp-test.txt                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m04_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03:/home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m03 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-928358 node stop m02 -v=7                                                    | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-928358 node start m02 -v=7                                                   | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:10:59
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:10:59.463321  150723 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:10:59.463437  150723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:59.463447  150723 out.go:358] Setting ErrFile to fd 2...
	I1028 11:10:59.463453  150723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:59.463619  150723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:10:59.464198  150723 out.go:352] Setting JSON to false
	I1028 11:10:59.465062  150723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3202,"bootTime":1730110657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:10:59.465170  150723 start.go:139] virtualization: kvm guest
	I1028 11:10:59.467541  150723 out.go:177] * [ha-928358] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:10:59.469144  150723 notify.go:220] Checking for updates...
	I1028 11:10:59.469164  150723 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:10:59.470932  150723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:10:59.472579  150723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:10:59.474106  150723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:59.476022  150723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:10:59.477386  150723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:10:59.478873  150723 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:10:59.515106  150723 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:10:59.516643  150723 start.go:297] selected driver: kvm2
	I1028 11:10:59.516662  150723 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:10:59.516677  150723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:10:59.517412  150723 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:10:59.517509  150723 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:10:59.533665  150723 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:10:59.533714  150723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:10:59.533960  150723 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:10:59.533991  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:10:59.534033  150723 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:10:59.534056  150723 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:10:59.534109  150723 start.go:340] cluster config:
	{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:10:59.534204  150723 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:10:59.536334  150723 out.go:177] * Starting "ha-928358" primary control-plane node in "ha-928358" cluster
	I1028 11:10:59.537748  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:10:59.537794  150723 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:10:59.537802  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:10:59.537881  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:10:59.537891  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:10:59.538184  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:10:59.538208  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json: {Name:mkb8dad6cb32a1c4cc26cae85e4e9234d9821c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:10:59.538374  150723 start.go:360] acquireMachinesLock for ha-928358: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:10:59.538406  150723 start.go:364] duration metric: took 16.963µs to acquireMachinesLock for "ha-928358"
	I1028 11:10:59.538425  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:10:59.538479  150723 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:10:59.540050  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:10:59.540188  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:10:59.540238  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:10:59.555032  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I1028 11:10:59.555455  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:10:59.555961  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:10:59.556000  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:10:59.556420  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:10:59.556590  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:10:59.556764  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:10:59.556945  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:10:59.556977  150723 client.go:168] LocalClient.Create starting
	I1028 11:10:59.557015  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:10:59.557068  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:10:59.557092  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:10:59.557167  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:10:59.557195  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:10:59.557226  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:10:59.557253  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:10:59.557273  150723 main.go:141] libmachine: (ha-928358) Calling .PreCreateCheck
	I1028 11:10:59.557662  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:10:59.558063  150723 main.go:141] libmachine: Creating machine...
	I1028 11:10:59.558080  150723 main.go:141] libmachine: (ha-928358) Calling .Create
	I1028 11:10:59.558226  150723 main.go:141] libmachine: (ha-928358) Creating KVM machine...
	I1028 11:10:59.559811  150723 main.go:141] libmachine: (ha-928358) DBG | found existing default KVM network
	I1028 11:10:59.560481  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.560340  150746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I1028 11:10:59.560504  150723 main.go:141] libmachine: (ha-928358) DBG | created network xml: 
	I1028 11:10:59.560515  150723 main.go:141] libmachine: (ha-928358) DBG | <network>
	I1028 11:10:59.560521  150723 main.go:141] libmachine: (ha-928358) DBG |   <name>mk-ha-928358</name>
	I1028 11:10:59.560530  150723 main.go:141] libmachine: (ha-928358) DBG |   <dns enable='no'/>
	I1028 11:10:59.560536  150723 main.go:141] libmachine: (ha-928358) DBG |   
	I1028 11:10:59.560547  150723 main.go:141] libmachine: (ha-928358) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:10:59.560555  150723 main.go:141] libmachine: (ha-928358) DBG |     <dhcp>
	I1028 11:10:59.560564  150723 main.go:141] libmachine: (ha-928358) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:10:59.560572  150723 main.go:141] libmachine: (ha-928358) DBG |     </dhcp>
	I1028 11:10:59.560581  150723 main.go:141] libmachine: (ha-928358) DBG |   </ip>
	I1028 11:10:59.560587  150723 main.go:141] libmachine: (ha-928358) DBG |   
	I1028 11:10:59.560595  150723 main.go:141] libmachine: (ha-928358) DBG | </network>
	I1028 11:10:59.560601  150723 main.go:141] libmachine: (ha-928358) DBG | 
	I1028 11:10:59.566260  150723 main.go:141] libmachine: (ha-928358) DBG | trying to create private KVM network mk-ha-928358 192.168.39.0/24...
	I1028 11:10:59.635650  150723 main.go:141] libmachine: (ha-928358) DBG | private KVM network mk-ha-928358 192.168.39.0/24 created
	I1028 11:10:59.635720  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.635608  150746 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:59.635745  150723 main.go:141] libmachine: (ha-928358) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 ...
	I1028 11:10:59.635835  150723 main.go:141] libmachine: (ha-928358) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:10:59.635904  150723 main.go:141] libmachine: (ha-928358) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:10:59.913193  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.913037  150746 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa...
	I1028 11:10:59.999912  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.999757  150746 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/ha-928358.rawdisk...
	I1028 11:10:59.999940  150723 main.go:141] libmachine: (ha-928358) DBG | Writing magic tar header
	I1028 11:10:59.999950  150723 main.go:141] libmachine: (ha-928358) DBG | Writing SSH key tar header
	I1028 11:10:59.999957  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.999874  150746 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 ...
	I1028 11:10:59.999966  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358
	I1028 11:11:00.000011  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 (perms=drwx------)
	I1028 11:11:00.000025  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:11:00.000035  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:11:00.000055  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:11:00.000076  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:11:00.000090  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:00.000108  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:11:00.000117  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:11:00.000127  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:11:00.000138  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home
	I1028 11:11:00.000147  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:11:00.000160  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:11:00.000177  150723 main.go:141] libmachine: (ha-928358) DBG | Skipping /home - not owner
	I1028 11:11:00.000190  150723 main.go:141] libmachine: (ha-928358) Creating domain...
	I1028 11:11:00.001605  150723 main.go:141] libmachine: (ha-928358) define libvirt domain using xml: 
	I1028 11:11:00.001643  150723 main.go:141] libmachine: (ha-928358) <domain type='kvm'>
	I1028 11:11:00.001657  150723 main.go:141] libmachine: (ha-928358)   <name>ha-928358</name>
	I1028 11:11:00.001672  150723 main.go:141] libmachine: (ha-928358)   <memory unit='MiB'>2200</memory>
	I1028 11:11:00.001685  150723 main.go:141] libmachine: (ha-928358)   <vcpu>2</vcpu>
	I1028 11:11:00.001693  150723 main.go:141] libmachine: (ha-928358)   <features>
	I1028 11:11:00.001703  150723 main.go:141] libmachine: (ha-928358)     <acpi/>
	I1028 11:11:00.001711  150723 main.go:141] libmachine: (ha-928358)     <apic/>
	I1028 11:11:00.001724  150723 main.go:141] libmachine: (ha-928358)     <pae/>
	I1028 11:11:00.001748  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.001760  150723 main.go:141] libmachine: (ha-928358)   </features>
	I1028 11:11:00.001770  150723 main.go:141] libmachine: (ha-928358)   <cpu mode='host-passthrough'>
	I1028 11:11:00.001783  150723 main.go:141] libmachine: (ha-928358)   
	I1028 11:11:00.001795  150723 main.go:141] libmachine: (ha-928358)   </cpu>
	I1028 11:11:00.001806  150723 main.go:141] libmachine: (ha-928358)   <os>
	I1028 11:11:00.001820  150723 main.go:141] libmachine: (ha-928358)     <type>hvm</type>
	I1028 11:11:00.001839  150723 main.go:141] libmachine: (ha-928358)     <boot dev='cdrom'/>
	I1028 11:11:00.001851  150723 main.go:141] libmachine: (ha-928358)     <boot dev='hd'/>
	I1028 11:11:00.001863  150723 main.go:141] libmachine: (ha-928358)     <bootmenu enable='no'/>
	I1028 11:11:00.001872  150723 main.go:141] libmachine: (ha-928358)   </os>
	I1028 11:11:00.001884  150723 main.go:141] libmachine: (ha-928358)   <devices>
	I1028 11:11:00.001898  150723 main.go:141] libmachine: (ha-928358)     <disk type='file' device='cdrom'>
	I1028 11:11:00.001919  150723 main.go:141] libmachine: (ha-928358)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/boot2docker.iso'/>
	I1028 11:11:00.001933  150723 main.go:141] libmachine: (ha-928358)       <target dev='hdc' bus='scsi'/>
	I1028 11:11:00.001968  150723 main.go:141] libmachine: (ha-928358)       <readonly/>
	I1028 11:11:00.001991  150723 main.go:141] libmachine: (ha-928358)     </disk>
	I1028 11:11:00.002008  150723 main.go:141] libmachine: (ha-928358)     <disk type='file' device='disk'>
	I1028 11:11:00.002023  150723 main.go:141] libmachine: (ha-928358)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:11:00.002044  150723 main.go:141] libmachine: (ha-928358)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/ha-928358.rawdisk'/>
	I1028 11:11:00.002058  150723 main.go:141] libmachine: (ha-928358)       <target dev='hda' bus='virtio'/>
	I1028 11:11:00.002070  150723 main.go:141] libmachine: (ha-928358)     </disk>
	I1028 11:11:00.002106  150723 main.go:141] libmachine: (ha-928358)     <interface type='network'>
	I1028 11:11:00.002133  150723 main.go:141] libmachine: (ha-928358)       <source network='mk-ha-928358'/>
	I1028 11:11:00.002148  150723 main.go:141] libmachine: (ha-928358)       <model type='virtio'/>
	I1028 11:11:00.002159  150723 main.go:141] libmachine: (ha-928358)     </interface>
	I1028 11:11:00.002172  150723 main.go:141] libmachine: (ha-928358)     <interface type='network'>
	I1028 11:11:00.002179  150723 main.go:141] libmachine: (ha-928358)       <source network='default'/>
	I1028 11:11:00.002190  150723 main.go:141] libmachine: (ha-928358)       <model type='virtio'/>
	I1028 11:11:00.002197  150723 main.go:141] libmachine: (ha-928358)     </interface>
	I1028 11:11:00.002206  150723 main.go:141] libmachine: (ha-928358)     <serial type='pty'>
	I1028 11:11:00.002210  150723 main.go:141] libmachine: (ha-928358)       <target port='0'/>
	I1028 11:11:00.002216  150723 main.go:141] libmachine: (ha-928358)     </serial>
	I1028 11:11:00.002226  150723 main.go:141] libmachine: (ha-928358)     <console type='pty'>
	I1028 11:11:00.002250  150723 main.go:141] libmachine: (ha-928358)       <target type='serial' port='0'/>
	I1028 11:11:00.002282  150723 main.go:141] libmachine: (ha-928358)     </console>
	I1028 11:11:00.002291  150723 main.go:141] libmachine: (ha-928358)     <rng model='virtio'>
	I1028 11:11:00.002297  150723 main.go:141] libmachine: (ha-928358)       <backend model='random'>/dev/random</backend>
	I1028 11:11:00.002303  150723 main.go:141] libmachine: (ha-928358)     </rng>
	I1028 11:11:00.002306  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.002311  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.002318  150723 main.go:141] libmachine: (ha-928358)   </devices>
	I1028 11:11:00.002323  150723 main.go:141] libmachine: (ha-928358) </domain>
	I1028 11:11:00.002328  150723 main.go:141] libmachine: (ha-928358) 
	I1028 11:11:00.006810  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:30:04:d3 in network default
	I1028 11:11:00.007391  150723 main.go:141] libmachine: (ha-928358) Ensuring networks are active...
	I1028 11:11:00.007412  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:00.008229  150723 main.go:141] libmachine: (ha-928358) Ensuring network default is active
	I1028 11:11:00.008655  150723 main.go:141] libmachine: (ha-928358) Ensuring network mk-ha-928358 is active
	I1028 11:11:00.009320  150723 main.go:141] libmachine: (ha-928358) Getting domain xml...
	I1028 11:11:00.010062  150723 main.go:141] libmachine: (ha-928358) Creating domain...
	I1028 11:11:01.218137  150723 main.go:141] libmachine: (ha-928358) Waiting to get IP...
	I1028 11:11:01.218922  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.219337  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.219385  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.219330  150746 retry.go:31] will retry after 310.252899ms: waiting for machine to come up
	I1028 11:11:01.530950  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.531414  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.531437  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.531371  150746 retry.go:31] will retry after 282.464528ms: waiting for machine to come up
	I1028 11:11:01.815720  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.816159  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.816184  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.816121  150746 retry.go:31] will retry after 304.583775ms: waiting for machine to come up
	I1028 11:11:02.122718  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:02.123224  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:02.123251  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:02.123154  150746 retry.go:31] will retry after 442.531578ms: waiting for machine to come up
	I1028 11:11:02.566777  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:02.567197  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:02.567222  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:02.567162  150746 retry.go:31] will retry after 677.799642ms: waiting for machine to come up
	I1028 11:11:03.246160  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:03.246663  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:03.246691  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:03.246611  150746 retry.go:31] will retry after 661.382392ms: waiting for machine to come up
	I1028 11:11:03.909443  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:03.909955  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:03.910006  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:03.909898  150746 retry.go:31] will retry after 1.086932803s: waiting for machine to come up
	I1028 11:11:04.997802  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:04.998295  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:04.998322  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:04.998231  150746 retry.go:31] will retry after 1.028978753s: waiting for machine to come up
	I1028 11:11:06.028312  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:06.028699  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:06.028724  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:06.028658  150746 retry.go:31] will retry after 1.229241603s: waiting for machine to come up
	I1028 11:11:07.259043  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:07.259415  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:07.259442  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:07.259356  150746 retry.go:31] will retry after 1.621101278s: waiting for machine to come up
	I1028 11:11:08.882760  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:08.883130  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:08.883166  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:08.883106  150746 retry.go:31] will retry after 2.010099388s: waiting for machine to come up
	I1028 11:11:10.894594  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:10.895005  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:10.895028  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:10.894965  150746 retry.go:31] will retry after 2.268994964s: waiting for machine to come up
	I1028 11:11:13.166469  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:13.166906  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:13.166930  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:13.166853  150746 retry.go:31] will retry after 2.964491157s: waiting for machine to come up
	I1028 11:11:16.134568  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:16.135014  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:16.135030  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:16.134978  150746 retry.go:31] will retry after 3.669669561s: waiting for machine to come up
	I1028 11:11:19.805844  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:19.806451  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:19.806483  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:19.806402  150746 retry.go:31] will retry after 6.986761695s: waiting for machine to come up
	I1028 11:11:26.796618  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.797199  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has current primary IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.797228  150723 main.go:141] libmachine: (ha-928358) Found IP for machine: 192.168.39.206
	I1028 11:11:26.797258  150723 main.go:141] libmachine: (ha-928358) Reserving static IP address...
	I1028 11:11:26.797624  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find host DHCP lease matching {name: "ha-928358", mac: "52:54:00:dd:b2:b7", ip: "192.168.39.206"} in network mk-ha-928358
	I1028 11:11:26.873582  150723 main.go:141] libmachine: (ha-928358) Reserved static IP address: 192.168.39.206
	I1028 11:11:26.873609  150723 main.go:141] libmachine: (ha-928358) Waiting for SSH to be available...
	I1028 11:11:26.873619  150723 main.go:141] libmachine: (ha-928358) DBG | Getting to WaitForSSH function...
	I1028 11:11:26.876283  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.876750  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:26.876781  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.876886  150723 main.go:141] libmachine: (ha-928358) DBG | Using SSH client type: external
	I1028 11:11:26.876901  150723 main.go:141] libmachine: (ha-928358) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa (-rw-------)
	I1028 11:11:26.876929  150723 main.go:141] libmachine: (ha-928358) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:11:26.876941  150723 main.go:141] libmachine: (ha-928358) DBG | About to run SSH command:
	I1028 11:11:26.876952  150723 main.go:141] libmachine: (ha-928358) DBG | exit 0
	I1028 11:11:27.009708  150723 main.go:141] libmachine: (ha-928358) DBG | SSH cmd err, output: <nil>: 
	I1028 11:11:27.010071  150723 main.go:141] libmachine: (ha-928358) KVM machine creation complete!
	I1028 11:11:27.010352  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:11:27.010925  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:27.011146  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:27.011301  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:11:27.011311  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:27.012679  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:11:27.012693  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:11:27.012699  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:11:27.012704  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.014867  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.015214  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.015263  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.015327  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.015507  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.015644  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.015739  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.015911  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.016106  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.016117  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:11:27.128876  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:11:27.128903  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:11:27.128915  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.131646  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.132081  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.132109  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.132331  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.132525  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.132697  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.132852  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.133070  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.133229  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.133242  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:11:27.250569  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:11:27.250647  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:11:27.250657  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:11:27.250664  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.250929  150723 buildroot.go:166] provisioning hostname "ha-928358"
	I1028 11:11:27.250971  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.251130  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.253765  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.254120  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.254146  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.254297  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.254451  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.254601  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.254758  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.254909  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.255102  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.255118  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358 && echo "ha-928358" | sudo tee /etc/hostname
	I1028 11:11:27.384932  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358
	
	I1028 11:11:27.384962  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.387904  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.388215  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.388243  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.388516  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.388719  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.388884  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.389002  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.389152  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.389334  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.389355  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:11:27.516473  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
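The exchange above is the pattern the whole provisioning phase follows: libmachine opens an SSH session to the guest (192.168.39.206:22, user "docker", the machine's id_rsa key) and runs one shell command per step, capturing its output. As a rough illustration only, not minikube's actual code, the same round trip could be reproduced with golang.org/x/crypto/ssh; the host, user, key path, and command come from the log above, everything else in the sketch is an assumption.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path taken from the sshutil line in the log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
		}
		client, err := ssh.Dial("tcp", "192.168.39.206:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// One provisioning step == one command, exactly as the log shows.
		out, err := session.CombinedOutput(`sudo hostname ha-928358 && echo "ha-928358" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(string(out))
	}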
	I1028 11:11:27.516502  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:11:27.516519  150723 buildroot.go:174] setting up certificates
	I1028 11:11:27.516529  150723 provision.go:84] configureAuth start
	I1028 11:11:27.516537  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.516866  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:27.519682  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.520053  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.520077  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.520298  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.522648  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.522984  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.523022  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.523127  150723 provision.go:143] copyHostCerts
	I1028 11:11:27.523161  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:11:27.523220  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:11:27.523235  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:11:27.523317  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:11:27.523418  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:11:27.523442  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:11:27.523451  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:11:27.523494  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:11:27.523565  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:11:27.523591  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:11:27.523600  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:11:27.523634  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:11:27.523699  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358 san=[127.0.0.1 192.168.39.206 ha-928358 localhost minikube]
	I1028 11:11:27.652184  150723 provision.go:177] copyRemoteCerts
	I1028 11:11:27.652239  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:11:27.652263  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.655247  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.655509  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.655537  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.655747  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.655942  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.656141  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.656367  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:27.747959  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:11:27.748026  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:11:27.773785  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:11:27.773875  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1028 11:11:27.798172  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:11:27.798246  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:11:27.823795  150723 provision.go:87] duration metric: took 307.251687ms to configureAuth
	I1028 11:11:27.823824  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:11:27.823999  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:27.824098  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.826733  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.827058  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.827095  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.827231  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.827430  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.827593  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.827720  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.827882  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.828064  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.828082  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:11:28.063521  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:11:28.063544  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:11:28.063563  150723 main.go:141] libmachine: (ha-928358) Calling .GetURL
	I1028 11:11:28.064889  150723 main.go:141] libmachine: (ha-928358) DBG | Using libvirt version 6000000
	I1028 11:11:28.067440  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.067909  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.067936  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.068169  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:11:28.068184  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:11:28.068190  150723 client.go:171] duration metric: took 28.511205055s to LocalClient.Create
	I1028 11:11:28.068213  150723 start.go:167] duration metric: took 28.511273119s to libmachine.API.Create "ha-928358"
	I1028 11:11:28.068224  150723 start.go:293] postStartSetup for "ha-928358" (driver="kvm2")
	I1028 11:11:28.068234  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:11:28.068250  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.068499  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:11:28.068524  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.070718  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.071018  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.071047  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.071207  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.071391  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.071596  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.071768  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.160093  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:11:28.164580  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:11:28.164611  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:11:28.164677  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:11:28.164753  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:11:28.164768  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:11:28.164860  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:11:28.174780  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:11:28.200051  150723 start.go:296] duration metric: took 131.810016ms for postStartSetup
	I1028 11:11:28.200113  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:11:28.200681  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:28.203634  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.204015  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.204039  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.204248  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:28.204459  150723 start.go:128] duration metric: took 28.665968765s to createHost
	I1028 11:11:28.204486  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.206915  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.207241  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.207270  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.207406  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.207565  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.207714  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.207841  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.207995  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:28.208148  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:28.208158  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:11:28.326642  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730113888.306870077
	
	I1028 11:11:28.326664  150723 fix.go:216] guest clock: 1730113888.306870077
	I1028 11:11:28.326674  150723 fix.go:229] Guest: 2024-10-28 11:11:28.306870077 +0000 UTC Remote: 2024-10-28 11:11:28.204471945 +0000 UTC m=+28.781211208 (delta=102.398132ms)
	I1028 11:11:28.326699  150723 fix.go:200] guest clock delta is within tolerance: 102.398132ms
	I1028 11:11:28.326706  150723 start.go:83] releasing machines lock for "ha-928358", held for 28.788289196s
	I1028 11:11:28.326726  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.327001  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:28.329581  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.329968  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.330003  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.330168  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330728  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330884  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330998  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:11:28.331060  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.331115  150723 ssh_runner.go:195] Run: cat /version.json
	I1028 11:11:28.331141  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.333639  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.333966  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.333994  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334015  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334246  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.334387  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.334412  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334416  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.334585  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.334627  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.334755  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.334771  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.334927  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.335084  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.419255  150723 ssh_runner.go:195] Run: systemctl --version
	I1028 11:11:28.450377  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:11:28.614960  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:11:28.621690  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:11:28.621762  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:11:28.640026  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:11:28.640058  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:11:28.640161  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:11:28.657821  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:11:28.673308  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:11:28.673372  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:11:28.688651  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:11:28.704016  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:11:28.829012  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:11:28.990202  150723 docker.go:233] disabling docker service ...
	I1028 11:11:28.990264  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:11:29.006016  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:11:29.019798  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:11:29.148701  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:11:29.286836  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:11:29.301306  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:11:29.321180  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:11:29.321242  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.332417  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:11:29.332516  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.344116  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.355229  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.366386  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:11:29.377683  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.388680  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.406712  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
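The CRI-O runtime configuration above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup_manager, conmon_cgroup, and the unprivileged-port sysctl. A minimal Go sketch of the same idea, assuming the file is edited locally rather than through ssh_runner, one regex rewrite per setting; only the two substitutions shown is illustrative, not the full set:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	// rewrite applies a single sed-style substitution to a config file.
	func rewrite(path, pattern, replacement string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(pattern).ReplaceAll(data, []byte(replacement))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		// Mirror the first two sed commands in the log: pause image and cgroup driver.
		if err := rewrite(conf, `(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`); err != nil {
			log.Fatal(err)
		}
		if err := rewrite(conf, `(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`); err != nil {
			log.Fatal(err)
		}
	}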
	I1028 11:11:29.418602  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:11:29.428422  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:11:29.428489  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:11:29.442860  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
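The lines just above show a check-then-fallback: reading net.bridge.bridge-nf-call-iptables fails with status 255 because the bridge module is not loaded on the fresh guest, so br_netfilter is modprobe'd and IPv4 forwarding is enabled before CRI-O is restarted. A small local-only sketch of that fallback, assuming root access instead of the ssh_runner used here:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes one shell step and logs its combined output on failure;
	// it exists only to keep the sketch short.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Printf("%s %v failed: %v\n%s", name, args, err, out)
		}
		return err
	}

	func main() {
		// Probe the bridge netfilter sysctl first; on a fresh guest it fails
		// with "No such file or directory" as in the log.
		if run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables") != nil {
			// Fall back to loading the module, matching the next command in the log.
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				log.Fatal(err)
			}
		}
		// Then enable IPv4 forwarding, as the following log line does.
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			log.Fatal(err)
		}
	}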
	I1028 11:11:29.453466  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:11:29.587618  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:11:29.702292  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:11:29.702379  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:11:29.708037  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:11:29.708101  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:11:29.712169  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:11:29.760681  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:11:29.760781  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:11:29.793958  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:11:29.827829  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:11:29.829108  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:29.831950  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:29.832308  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:29.832337  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:29.832530  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:11:29.837077  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:11:29.850764  150723 kubeadm.go:883] updating cluster {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:11:29.850982  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:11:29.851067  150723 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:11:29.884186  150723 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:11:29.884257  150723 ssh_runner.go:195] Run: which lz4
	I1028 11:11:29.888297  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:11:29.888406  150723 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:11:29.892595  150723 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:11:29.892630  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:11:31.364550  150723 crio.go:462] duration metric: took 1.47616531s to copy over tarball
	I1028 11:11:31.364646  150723 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:11:33.492729  150723 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.128048416s)
	I1028 11:11:33.492765  150723 crio.go:469] duration metric: took 2.12817379s to extract the tarball
	I1028 11:11:33.492775  150723 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:11:33.530789  150723 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:11:33.576388  150723 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:11:33.576418  150723 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:11:33.576428  150723 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.2 crio true true} ...
	I1028 11:11:33.576525  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:11:33.576597  150723 ssh_runner.go:195] Run: crio config
	I1028 11:11:33.628433  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:11:33.628457  150723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:11:33.628468  150723 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:11:33.628490  150723 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-928358 NodeName:ha-928358 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:11:33.628623  150723 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-928358"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
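The block ending above is the fully rendered kubeadm config; the kubeadm.go:189 line before it lists the parameters it was built from. As an illustration of how such a parameter set can be turned into a manifest, here is a much-reduced text/template sketch; the struct fields and the template cover only a handful of the options and are not minikube's real types:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams is a trimmed-down, illustrative parameter set.
	type kubeadmParams struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
		PodSubnet         string
		ServiceCIDR       string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: 8443
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		p := kubeadmParams{
			KubernetesVersion: "v1.31.2",
			NodeName:          "ha-928358",
			NodeIP:            "192.168.39.206",
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}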
	
	I1028 11:11:33.628649  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:11:33.628693  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:11:33.645502  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:11:33.645637  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:11:33.645712  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:11:33.657169  150723 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:11:33.657234  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:11:33.668705  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:11:33.687712  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:11:33.707287  150723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:11:33.725968  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:11:33.745306  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:11:33.749954  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:11:33.764379  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:11:33.885154  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:11:33.902745  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.206
	I1028 11:11:33.902769  150723 certs.go:194] generating shared ca certs ...
	I1028 11:11:33.902784  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:33.902965  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:11:33.903024  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:11:33.903039  150723 certs.go:256] generating profile certs ...
	I1028 11:11:33.903106  150723 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:11:33.903126  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt with IP's: []
	I1028 11:11:34.090717  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt ...
	I1028 11:11:34.090747  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt: {Name:mk3976b6be27fc4f31aa39dbf48c0afa90955478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.090957  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key ...
	I1028 11:11:34.090981  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key: {Name:mk302db81268b764894e98d850b90eaaced7a15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.091101  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923
	I1028 11:11:34.091124  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.254]
	I1028 11:11:34.335900  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 ...
	I1028 11:11:34.335935  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923: {Name:mk0008343e6fdd7a08b2d031f0ba617f7a66f590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.336144  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923 ...
	I1028 11:11:34.336163  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923: {Name:mkd6c56ea43ae5fd58d0e46e3c3070e385813140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.336286  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:11:34.336450  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:11:34.336537  150723 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:11:34.336559  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt with IP's: []
	I1028 11:11:34.464000  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt ...
	I1028 11:11:34.464029  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt: {Name:mkb9ddbbbcf10a07648ff0910f8f6f99edd94a08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.464231  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key ...
	I1028 11:11:34.464247  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key: {Name:mk17d0ad23ae67dc57b4cfd6ae702fbcda30c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
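The certs.go steps above generate the profile's client, apiserver, and aggregator certificates, the apiserver one carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.206 and the VIP 192.168.39.254. A condensed crypto/x509 sketch of signing such a cert against a CA follows; the throwaway in-memory CA exists only so the example runs stand-alone, whereas the real flow loads the existing minikubeCA key pair from the .minikube directory:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	// must keeps the sketch short by aborting on any error.
	func must[T any](v T, err error) T {
		if err != nil {
			log.Fatal(err)
		}
		return v
	}

	func main() {
		// Throwaway in-memory CA (assumption); real runs load minikubeCA instead.
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

		// Apiserver-style cert carrying the same IP SANs the log prints.
		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.206"), net.ParseIP("192.168.39.254"),
			},
		}
		der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}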
	I1028 11:11:34.464343  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:11:34.464369  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:11:34.464389  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:11:34.464407  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:11:34.464422  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:11:34.464435  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:11:34.464453  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:11:34.464472  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:11:34.464549  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:11:34.464601  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:11:34.464617  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:11:34.464647  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:11:34.464682  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:11:34.464714  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:11:34.464766  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:11:34.464809  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.464829  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.464844  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.465667  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:11:34.492761  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:11:34.519090  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:11:34.544886  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:11:34.571307  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:11:34.596836  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:11:34.622460  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:11:34.648376  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:11:34.677988  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:11:34.708308  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:11:34.732512  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:11:34.757152  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:11:34.774559  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:11:34.780665  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:11:34.792209  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.797675  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.797733  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.804182  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:11:34.816617  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:11:34.829067  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.834000  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.834062  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.840080  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:11:34.851913  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:11:34.863842  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.868862  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.868942  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.875065  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
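Each CA copied to /usr/share/ca-certificates is then made discoverable the OpenSSL way, as the three command pairs above show: compute the subject hash with openssl x509 -hash -noout and add a /etc/ssl/certs/<hash>.0 symlink (b5213941.0, 3ec20f2e.0, 51391683.0 in this run) so hash-based CA lookup finds the file. A small sketch of that pairing, shelling out to openssl rather than re-implementing the hash, and assuming it runs with enough privilege to write /etc/ssl/certs:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	// linkByHash asks openssl for the certificate's subject hash and then
	// creates the <hash>.0 symlink that OpenSSL-based clients look up.
	func linkByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Emulate "ln -fs": drop any stale link before recreating it.
		if err := os.Remove(link); err != nil && !os.IsNotExist(err) {
			return err
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}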
	I1028 11:11:34.888703  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:11:34.893205  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:11:34.893271  150723 kubeadm.go:392] StartCluster: {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:11:34.893354  150723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:11:34.893425  150723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:11:34.932903  150723 cri.go:89] found id: ""
	I1028 11:11:34.932974  150723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:11:34.944526  150723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:11:34.956312  150723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:11:34.967457  150723 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:11:34.967484  150723 kubeadm.go:157] found existing configuration files:
	
	I1028 11:11:34.967537  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:11:34.977810  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:11:34.977875  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:11:34.988232  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:11:34.998184  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:11:34.998247  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:11:35.008728  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:11:35.018729  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:11:35.018793  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:11:35.029800  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:11:35.040304  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:11:35.040357  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
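
Note: the four grep/rm pairs above are minikube's stale-config cleanup. Each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so the kubeadm init that follows can regenerate it. Condensed into one loop (purely illustrative; the file list and endpoint are the ones logged):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
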
	I1028 11:11:35.050830  150723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:11:35.164435  150723 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:11:35.164499  150723 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:11:35.281374  150723 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:11:35.281556  150723 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:11:35.281686  150723 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:11:35.294386  150723 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:11:35.479371  150723 out.go:235]   - Generating certificates and keys ...
	I1028 11:11:35.479512  150723 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:11:35.479602  150723 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:11:35.531977  150723 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:11:35.706199  150723 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:11:35.805605  150723 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:11:35.955545  150723 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:11:36.024313  150723 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:11:36.024446  150723 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-928358 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1028 11:11:36.166366  150723 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:11:36.166553  150723 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-928358 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1028 11:11:36.477451  150723 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:11:36.529937  150723 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:11:36.764928  150723 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:11:36.765199  150723 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:11:36.958542  150723 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:11:37.098519  150723 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:11:37.432447  150723 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:11:37.510265  150723 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:11:37.727523  150723 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:11:37.728159  150723 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:11:37.734975  150723 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:11:37.736761  150723 out.go:235]   - Booting up control plane ...
	I1028 11:11:37.736891  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:11:37.737036  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:11:37.737392  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:11:37.761460  150723 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:11:37.769245  150723 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:11:37.769327  150723 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:11:37.901440  150723 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:11:37.901605  150723 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:11:38.403804  150723 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.460314ms
	I1028 11:11:38.403927  150723 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:11:44.555956  150723 kubeadm.go:310] [api-check] The API server is healthy after 6.1544774s
	I1028 11:11:44.584149  150723 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:11:44.607891  150723 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:11:44.647415  150723 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:11:44.647602  150723 kubeadm.go:310] [mark-control-plane] Marking the node ha-928358 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:11:44.670940  150723 kubeadm.go:310] [bootstrap-token] Using token: 7u74ui.ti422fa98pbd45zp
	I1028 11:11:44.672724  150723 out.go:235]   - Configuring RBAC rules ...
	I1028 11:11:44.672861  150723 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:11:44.681325  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:11:44.701467  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:11:44.720481  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:11:44.731591  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:11:44.743611  150723 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:11:44.968060  150723 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:11:45.411017  150723 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:11:45.970736  150723 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:11:45.970791  150723 kubeadm.go:310] 
	I1028 11:11:45.970885  150723 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:11:45.970911  150723 kubeadm.go:310] 
	I1028 11:11:45.971033  150723 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:11:45.971045  150723 kubeadm.go:310] 
	I1028 11:11:45.971081  150723 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:11:45.971155  150723 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:11:45.971234  150723 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:11:45.971246  150723 kubeadm.go:310] 
	I1028 11:11:45.971327  150723 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:11:45.971346  150723 kubeadm.go:310] 
	I1028 11:11:45.971421  150723 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:11:45.971432  150723 kubeadm.go:310] 
	I1028 11:11:45.971526  150723 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:11:45.971668  150723 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:11:45.971782  150723 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:11:45.971802  150723 kubeadm.go:310] 
	I1028 11:11:45.971912  150723 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:11:45.972050  150723 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:11:45.972078  150723 kubeadm.go:310] 
	I1028 11:11:45.972201  150723 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7u74ui.ti422fa98pbd45zp \
	I1028 11:11:45.972360  150723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 11:11:45.972397  150723 kubeadm.go:310] 	--control-plane 
	I1028 11:11:45.972407  150723 kubeadm.go:310] 
	I1028 11:11:45.972546  150723 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:11:45.972563  150723 kubeadm.go:310] 
	I1028 11:11:45.972685  150723 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7u74ui.ti422fa98pbd45zp \
	I1028 11:11:45.972831  150723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 11:11:45.973046  150723 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:11:45.973098  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:11:45.973115  150723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:11:45.975136  150723 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:11:45.976845  150723 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:11:45.982665  150723 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:11:45.982687  150723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:11:46.004414  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
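
Note: the CNI manager lines above show minikube recommending kindnet for this multi-node profile and applying the 2601-byte manifest it copied to /var/tmp/minikube/cni.yaml with the cluster's own kubectl binary. A quick, purely illustrative way to check the result afterwards (the DaemonSet name is whatever the kindnet manifest defines, so only the listing is shown):

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get daemonsets
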
	I1028 11:11:46.391016  150723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:11:46.391108  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:46.391153  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358 minikube.k8s.io/updated_at=2024_10_28T11_11_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=true
	I1028 11:11:46.556219  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:46.556239  150723 ops.go:34] apiserver oom_adj: -16
	I1028 11:11:47.056803  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:47.556401  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:48.057031  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:48.556648  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:49.056531  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:49.556278  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.056341  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.557096  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.688176  150723 kubeadm.go:1113] duration metric: took 4.297146148s to wait for elevateKubeSystemPrivileges
	I1028 11:11:50.688219  150723 kubeadm.go:394] duration metric: took 15.794958001s to StartCluster
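
Note: the block of "get sa default" calls above, repeated roughly every 500 ms, is minikube waiting for the default ServiceAccount to exist before it reports elevateKubeSystemPrivileges complete (duration metric above: ~4.3s). A minimal, purely illustrative sketch of the same wait using the binary and kubeconfig paths from the log:

    KUBECTL=/var/lib/minikube/binaries/v1.31.2/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
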
	I1028 11:11:50.688240  150723 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:50.688317  150723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:11:50.689020  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:50.689264  150723 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:11:50.689283  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:11:50.689310  150723 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:11:50.689399  150723 addons.go:69] Setting storage-provisioner=true in profile "ha-928358"
	I1028 11:11:50.689294  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:11:50.689432  150723 addons.go:69] Setting default-storageclass=true in profile "ha-928358"
	I1028 11:11:50.689434  150723 addons.go:234] Setting addon storage-provisioner=true in "ha-928358"
	I1028 11:11:50.689444  150723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-928358"
	I1028 11:11:50.689473  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:11:50.689502  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:50.689978  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.690024  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.690030  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.690078  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.705787  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I1028 11:11:50.705799  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1028 11:11:50.706396  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.706425  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.706943  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.706961  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.707116  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.707141  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.707344  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.707538  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.707605  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.708242  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.708286  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.709865  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:11:50.710123  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:11:50.710718  150723 addons.go:234] Setting addon default-storageclass=true in "ha-928358"
	I1028 11:11:50.710749  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:11:50.710982  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.711007  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.711160  150723 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:11:50.724777  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I1028 11:11:50.725295  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.725751  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33439
	I1028 11:11:50.725906  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.725930  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.726287  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.726327  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.726526  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.726809  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.726831  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.727169  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.727730  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.727777  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.728384  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:50.730334  150723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:11:50.731788  150723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:11:50.731810  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:11:50.731829  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:50.735112  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.735661  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:50.735681  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.735902  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:50.736091  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:50.736234  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:50.736386  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:50.743829  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I1028 11:11:50.744355  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.744925  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.744949  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.745276  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.745461  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.747144  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:50.747358  150723 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:11:50.747374  150723 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:11:50.747388  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:50.749934  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.750358  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:50.750397  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.750503  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:50.750676  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:50.750813  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:50.750942  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:50.872575  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:11:50.921646  150723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:11:50.984303  150723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:11:51.311574  150723 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
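
Note: the long pipeline logged at 11:11:50.872575 rewrites the coredns ConfigMap in place: sed splices a hosts block that resolves host.minikube.internal to the host address 192.168.39.1 (with fallthrough) ahead of the existing "forward . /etc/resolv.conf" line, adds a "log" directive before "errors", and the result is fed back through kubectl replace. The same command, split across lines for readability only:

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -
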
	I1028 11:11:51.359517  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.359546  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.359929  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.359938  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.359978  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.359992  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.360011  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.360266  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.360332  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.360347  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.360405  150723 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:11:51.360435  150723 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:11:51.360539  150723 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:11:51.360552  150723 round_trippers.go:469] Request Headers:
	I1028 11:11:51.360564  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:11:51.360580  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:11:51.370574  150723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:11:51.371224  150723 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:11:51.371242  150723 round_trippers.go:469] Request Headers:
	I1028 11:11:51.371253  150723 round_trippers.go:473]     Content-Type: application/json
	I1028 11:11:51.371260  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:11:51.371264  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:11:51.378842  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:11:51.379088  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.379107  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.379391  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.379407  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.723667  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.723697  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.724015  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.724061  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.724071  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.724078  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.724024  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.724319  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.724335  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.726167  150723 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 11:11:51.727603  150723 addons.go:510] duration metric: took 1.038296123s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 11:11:51.727646  150723 start.go:246] waiting for cluster config update ...
	I1028 11:11:51.727661  150723 start.go:255] writing updated cluster config ...
	I1028 11:11:51.729506  150723 out.go:201] 
	I1028 11:11:51.731166  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:51.731233  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:51.732989  150723 out.go:177] * Starting "ha-928358-m02" control-plane node in "ha-928358" cluster
	I1028 11:11:51.734422  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:11:51.734443  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:11:51.734539  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:11:51.734550  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:11:51.734619  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:51.734790  150723 start.go:360] acquireMachinesLock for ha-928358-m02: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:11:51.734834  150723 start.go:364] duration metric: took 28.788µs to acquireMachinesLock for "ha-928358-m02"
	I1028 11:11:51.734851  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:11:51.734918  150723 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 11:11:51.736531  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:11:51.736608  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:51.736641  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:51.751347  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I1028 11:11:51.751714  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:51.752299  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:51.752328  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:51.752603  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:51.752792  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:11:51.752934  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:11:51.753123  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:11:51.753174  150723 client.go:168] LocalClient.Create starting
	I1028 11:11:51.753215  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:11:51.753263  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:11:51.753289  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:11:51.753362  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:11:51.753389  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:11:51.753404  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:11:51.753437  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:11:51.753449  150723 main.go:141] libmachine: (ha-928358-m02) Calling .PreCreateCheck
	I1028 11:11:51.753595  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:11:51.754006  150723 main.go:141] libmachine: Creating machine...
	I1028 11:11:51.754022  150723 main.go:141] libmachine: (ha-928358-m02) Calling .Create
	I1028 11:11:51.754205  150723 main.go:141] libmachine: (ha-928358-m02) Creating KVM machine...
	I1028 11:11:51.755415  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found existing default KVM network
	I1028 11:11:51.755582  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found existing private KVM network mk-ha-928358
	I1028 11:11:51.755707  150723 main.go:141] libmachine: (ha-928358-m02) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 ...
	I1028 11:11:51.755730  150723 main.go:141] libmachine: (ha-928358-m02) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:11:51.755821  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:51.755707  151103 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:51.755971  150723 main.go:141] libmachine: (ha-928358-m02) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:11:51.993174  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:51.993039  151103 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa...
	I1028 11:11:52.383008  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:52.382864  151103 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/ha-928358-m02.rawdisk...
	I1028 11:11:52.383053  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Writing magic tar header
	I1028 11:11:52.383094  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Writing SSH key tar header
	I1028 11:11:52.383117  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:52.383029  151103 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 ...
	I1028 11:11:52.383167  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02
	I1028 11:11:52.383203  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:11:52.383214  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 (perms=drwx------)
	I1028 11:11:52.383224  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:52.383237  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:11:52.383258  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:11:52.383272  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:11:52.383295  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:11:52.383304  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:11:52.383313  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:11:52.383324  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:11:52.383332  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home
	I1028 11:11:52.383343  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Skipping /home - not owner
	I1028 11:11:52.383370  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:11:52.383390  150723 main.go:141] libmachine: (ha-928358-m02) Creating domain...
	I1028 11:11:52.384348  150723 main.go:141] libmachine: (ha-928358-m02) define libvirt domain using xml: 
	I1028 11:11:52.384373  150723 main.go:141] libmachine: (ha-928358-m02) <domain type='kvm'>
	I1028 11:11:52.384400  150723 main.go:141] libmachine: (ha-928358-m02)   <name>ha-928358-m02</name>
	I1028 11:11:52.384412  150723 main.go:141] libmachine: (ha-928358-m02)   <memory unit='MiB'>2200</memory>
	I1028 11:11:52.384426  150723 main.go:141] libmachine: (ha-928358-m02)   <vcpu>2</vcpu>
	I1028 11:11:52.384436  150723 main.go:141] libmachine: (ha-928358-m02)   <features>
	I1028 11:11:52.384457  150723 main.go:141] libmachine: (ha-928358-m02)     <acpi/>
	I1028 11:11:52.384472  150723 main.go:141] libmachine: (ha-928358-m02)     <apic/>
	I1028 11:11:52.384478  150723 main.go:141] libmachine: (ha-928358-m02)     <pae/>
	I1028 11:11:52.384482  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384490  150723 main.go:141] libmachine: (ha-928358-m02)   </features>
	I1028 11:11:52.384494  150723 main.go:141] libmachine: (ha-928358-m02)   <cpu mode='host-passthrough'>
	I1028 11:11:52.384501  150723 main.go:141] libmachine: (ha-928358-m02)   
	I1028 11:11:52.384506  150723 main.go:141] libmachine: (ha-928358-m02)   </cpu>
	I1028 11:11:52.384511  150723 main.go:141] libmachine: (ha-928358-m02)   <os>
	I1028 11:11:52.384516  150723 main.go:141] libmachine: (ha-928358-m02)     <type>hvm</type>
	I1028 11:11:52.384522  150723 main.go:141] libmachine: (ha-928358-m02)     <boot dev='cdrom'/>
	I1028 11:11:52.384526  150723 main.go:141] libmachine: (ha-928358-m02)     <boot dev='hd'/>
	I1028 11:11:52.384531  150723 main.go:141] libmachine: (ha-928358-m02)     <bootmenu enable='no'/>
	I1028 11:11:52.384537  150723 main.go:141] libmachine: (ha-928358-m02)   </os>
	I1028 11:11:52.384561  150723 main.go:141] libmachine: (ha-928358-m02)   <devices>
	I1028 11:11:52.384580  150723 main.go:141] libmachine: (ha-928358-m02)     <disk type='file' device='cdrom'>
	I1028 11:11:52.384598  150723 main.go:141] libmachine: (ha-928358-m02)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/boot2docker.iso'/>
	I1028 11:11:52.384615  150723 main.go:141] libmachine: (ha-928358-m02)       <target dev='hdc' bus='scsi'/>
	I1028 11:11:52.384624  150723 main.go:141] libmachine: (ha-928358-m02)       <readonly/>
	I1028 11:11:52.384628  150723 main.go:141] libmachine: (ha-928358-m02)     </disk>
	I1028 11:11:52.384634  150723 main.go:141] libmachine: (ha-928358-m02)     <disk type='file' device='disk'>
	I1028 11:11:52.384642  150723 main.go:141] libmachine: (ha-928358-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:11:52.384650  150723 main.go:141] libmachine: (ha-928358-m02)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/ha-928358-m02.rawdisk'/>
	I1028 11:11:52.384657  150723 main.go:141] libmachine: (ha-928358-m02)       <target dev='hda' bus='virtio'/>
	I1028 11:11:52.384661  150723 main.go:141] libmachine: (ha-928358-m02)     </disk>
	I1028 11:11:52.384668  150723 main.go:141] libmachine: (ha-928358-m02)     <interface type='network'>
	I1028 11:11:52.384674  150723 main.go:141] libmachine: (ha-928358-m02)       <source network='mk-ha-928358'/>
	I1028 11:11:52.384681  150723 main.go:141] libmachine: (ha-928358-m02)       <model type='virtio'/>
	I1028 11:11:52.384688  150723 main.go:141] libmachine: (ha-928358-m02)     </interface>
	I1028 11:11:52.384692  150723 main.go:141] libmachine: (ha-928358-m02)     <interface type='network'>
	I1028 11:11:52.384698  150723 main.go:141] libmachine: (ha-928358-m02)       <source network='default'/>
	I1028 11:11:52.384703  150723 main.go:141] libmachine: (ha-928358-m02)       <model type='virtio'/>
	I1028 11:11:52.384708  150723 main.go:141] libmachine: (ha-928358-m02)     </interface>
	I1028 11:11:52.384713  150723 main.go:141] libmachine: (ha-928358-m02)     <serial type='pty'>
	I1028 11:11:52.384742  150723 main.go:141] libmachine: (ha-928358-m02)       <target port='0'/>
	I1028 11:11:52.384769  150723 main.go:141] libmachine: (ha-928358-m02)     </serial>
	I1028 11:11:52.384791  150723 main.go:141] libmachine: (ha-928358-m02)     <console type='pty'>
	I1028 11:11:52.384814  150723 main.go:141] libmachine: (ha-928358-m02)       <target type='serial' port='0'/>
	I1028 11:11:52.384828  150723 main.go:141] libmachine: (ha-928358-m02)     </console>
	I1028 11:11:52.384840  150723 main.go:141] libmachine: (ha-928358-m02)     <rng model='virtio'>
	I1028 11:11:52.384852  150723 main.go:141] libmachine: (ha-928358-m02)       <backend model='random'>/dev/random</backend>
	I1028 11:11:52.384859  150723 main.go:141] libmachine: (ha-928358-m02)     </rng>
	I1028 11:11:52.384865  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384887  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384900  150723 main.go:141] libmachine: (ha-928358-m02)   </devices>
	I1028 11:11:52.384910  150723 main.go:141] libmachine: (ha-928358-m02) </domain>
	I1028 11:11:52.384921  150723 main.go:141] libmachine: (ha-928358-m02) 
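
Note: the XML above is the complete libvirt definition created for the second control-plane VM: 2 vCPUs, 2200 MiB of RAM, host-passthrough CPU, the boot2docker ISO attached as a CD-ROM, the raw disk image as the primary virtio drive, and two virtio NICs, one on the cluster network mk-ha-928358 and one on the libvirt default network. Once defined, the domain can be inspected with ordinary libvirt tooling (not part of the test run), for example:

    virsh --connect qemu:///system dumpxml ha-928358-m02
    virsh --connect qemu:///system domifaddr ha-928358-m02
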
	I1028 11:11:52.391941  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:67:49 in network default
	I1028 11:11:52.392560  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring networks are active...
	I1028 11:11:52.392579  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:52.393436  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring network default is active
	I1028 11:11:52.393821  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring network mk-ha-928358 is active
	I1028 11:11:52.394171  150723 main.go:141] libmachine: (ha-928358-m02) Getting domain xml...
	I1028 11:11:52.394853  150723 main.go:141] libmachine: (ha-928358-m02) Creating domain...
	I1028 11:11:53.630024  150723 main.go:141] libmachine: (ha-928358-m02) Waiting to get IP...
	I1028 11:11:53.630962  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:53.631449  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:53.631495  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:53.631430  151103 retry.go:31] will retry after 231.171985ms: waiting for machine to come up
	I1028 11:11:53.864111  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:53.864512  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:53.864546  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:53.864499  151103 retry.go:31] will retry after 296.507043ms: waiting for machine to come up
	I1028 11:11:54.163050  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:54.163543  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:54.163593  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:54.163496  151103 retry.go:31] will retry after 357.855811ms: waiting for machine to come up
	I1028 11:11:54.523089  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:54.523546  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:54.523575  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:54.523481  151103 retry.go:31] will retry after 569.003787ms: waiting for machine to come up
	I1028 11:11:55.094333  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:55.094770  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:55.094795  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:55.094741  151103 retry.go:31] will retry after 495.310626ms: waiting for machine to come up
	I1028 11:11:55.591480  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:55.592037  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:55.592065  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:55.591984  151103 retry.go:31] will retry after 697.027358ms: waiting for machine to come up
	I1028 11:11:56.291011  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:56.291427  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:56.291455  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:56.291390  151103 retry.go:31] will retry after 819.98241ms: waiting for machine to come up
	I1028 11:11:57.112476  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:57.112920  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:57.112950  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:57.112861  151103 retry.go:31] will retry after 1.468451423s: waiting for machine to come up
	I1028 11:11:58.582633  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:58.583095  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:58.583117  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:58.583044  151103 retry.go:31] will retry after 1.732332827s: waiting for machine to come up
	I1028 11:12:00.316579  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:00.316974  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:00.317005  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:00.316915  151103 retry.go:31] will retry after 1.701246598s: waiting for machine to come up
	I1028 11:12:02.020279  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:02.020762  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:02.020780  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:02.020732  151103 retry.go:31] will retry after 2.239954262s: waiting for machine to come up
	I1028 11:12:04.262705  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:04.263103  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:04.263134  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:04.263076  151103 retry.go:31] will retry after 3.584543805s: waiting for machine to come up
	I1028 11:12:07.848824  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:07.849223  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:07.849246  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:07.849186  151103 retry.go:31] will retry after 4.083747812s: waiting for machine to come up
	I1028 11:12:11.934986  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:11.935519  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:11.935541  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:11.935464  151103 retry.go:31] will retry after 5.450262186s: waiting for machine to come up
	I1028 11:12:17.387598  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.388014  150723 main.go:141] libmachine: (ha-928358-m02) Found IP for machine: 192.168.39.15
	I1028 11:12:17.388040  150723 main.go:141] libmachine: (ha-928358-m02) Reserving static IP address...
	I1028 11:12:17.388061  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has current primary IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.388484  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find host DHCP lease matching {name: "ha-928358-m02", mac: "52:54:00:6f:70:28", ip: "192.168.39.15"} in network mk-ha-928358
	I1028 11:12:17.468628  150723 main.go:141] libmachine: (ha-928358-m02) Reserved static IP address: 192.168.39.15
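The retry lines above show libmachine repeatedly asking libvirt for the guest's DHCP lease, sleeping a growing, jittered interval between attempts until the lease finally appears and the static IP can be reserved. A minimal Go sketch of that backoff pattern follows; lookupIP is a hypothetical stand-in for the libvirt lease query, and the starting delay and growth factor are illustrative, not minikube's exact retry.go values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt for the domain's DHCP lease;
// it fails a few times before "finding" an address, like the log above.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.15", nil
}

func main() {
	wait := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay and add jitter, mirroring the
		// "will retry after ..." messages printed by retry.go.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
}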
	I1028 11:12:17.468659  150723 main.go:141] libmachine: (ha-928358-m02) Waiting for SSH to be available...
	I1028 11:12:17.468668  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Getting to WaitForSSH function...
	I1028 11:12:17.471501  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.472007  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.472034  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.472218  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using SSH client type: external
	I1028 11:12:17.472251  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa (-rw-------)
	I1028 11:12:17.472281  150723 main.go:141] libmachine: (ha-928358-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:12:17.472296  150723 main.go:141] libmachine: (ha-928358-m02) DBG | About to run SSH command:
	I1028 11:12:17.472313  150723 main.go:141] libmachine: (ha-928358-m02) DBG | exit 0
	I1028 11:12:17.602076  150723 main.go:141] libmachine: (ha-928358-m02) DBG | SSH cmd err, output: <nil>: 
	I1028 11:12:17.602372  150723 main.go:141] libmachine: (ha-928358-m02) KVM machine creation complete!
	I1028 11:12:17.602744  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:12:17.603321  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:17.603533  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:17.603697  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:12:17.603728  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetState
	I1028 11:12:17.605258  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:12:17.605275  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:12:17.605282  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:12:17.605291  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.607333  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.607701  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.607721  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.607912  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.608143  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.608313  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.608439  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.608583  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.608808  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.608820  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:12:17.721307  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
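WaitForSSH in the lines above probes the new guest by running the trivial command "exit 0" over SSH until it succeeds, using the same options the log prints (no host key checking, a short connect timeout, the machine's generated key). A hedged sketch of that probe, shelling out to the system ssh client rather than using libmachine's own SSH code; the host, user, key path and overall timeout are taken from or assumed for this run.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running `exit 0` over ssh until it succeeds, which is
// how the "About to run SSH command: exit 0" lines above probe the guest.
func waitForSSH(user, host, keyPath string) error {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd is up and accepting our key
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for ssh on %s", host)
}

func main() {
	// Host and key path echo the values in the log; adjust as needed.
	if err := waitForSSH("docker", "192.168.39.15",
		"/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa"); err != nil {
		fmt.Println(err)
	}
}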
	I1028 11:12:17.721336  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:12:17.721347  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.724798  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.725194  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.725223  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.725409  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.725636  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.725807  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.725966  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.726099  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.726262  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.726279  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:12:17.838473  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:12:17.838586  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:12:17.838602  150723 main.go:141] libmachine: Provisioning with buildroot...
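"Detecting the provisioner" above amounts to running cat /etc/os-release on the guest and matching the ID field, which is how the log concludes it has found a compatible "buildroot" host. A small sketch of that parse, assuming the file layout shown in the log output; detectProvisioner is an illustrative helper, not minikube's actual function.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// detectProvisioner reads an os-release style file (the same format the
// "cat /etc/os-release" command above prints) and returns its ID field.
func detectProvisioner(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID field in %s", path)
}

func main() {
	id, err := detectProvisioner("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found compatible host:", id) // e.g. "buildroot"
}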
	I1028 11:12:17.838613  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:17.838892  150723 buildroot.go:166] provisioning hostname "ha-928358-m02"
	I1028 11:12:17.838917  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:17.839093  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.841883  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.842317  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.842339  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.842472  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.842669  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.842831  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.842971  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.843156  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.843326  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.843338  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358-m02 && echo "ha-928358-m02" | sudo tee /etc/hostname
	I1028 11:12:17.968498  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358-m02
	
	I1028 11:12:17.968528  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.971246  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.971623  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.971653  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.971818  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.971988  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.972158  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.972315  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.972474  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.972671  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.972693  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:12:18.095026  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:12:18.095079  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:12:18.095099  150723 buildroot.go:174] setting up certificates
	I1028 11:12:18.095111  150723 provision.go:84] configureAuth start
	I1028 11:12:18.095125  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:18.095406  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.098183  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.098549  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.098574  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.098726  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.100797  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.101183  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.101209  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.101422  150723 provision.go:143] copyHostCerts
	I1028 11:12:18.101450  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:12:18.101483  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:12:18.101493  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:12:18.101585  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:12:18.101707  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:12:18.101736  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:12:18.101747  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:12:18.101792  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:12:18.101860  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:12:18.101880  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:12:18.101884  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:12:18.101906  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:12:18.101972  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358-m02 san=[127.0.0.1 192.168.39.15 ha-928358-m02 localhost minikube]
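The "generating server cert" line above issues a machine server certificate signed by the local minikube CA, with the IPs and hostnames listed after san= as subject alternative names. Below is a minimal standard-library sketch of that kind of issuance; it creates a throwaway CA instead of loading ca.pem/ca-key.pem from .minikube/certs, ignores errors for brevity, and the validity period and key size are illustrative assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway "CA"; the real flow signs with the existing CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-928358-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.15")},
		DNSNames:     []string{"ha-928358-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}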
	I1028 11:12:18.196094  150723 provision.go:177] copyRemoteCerts
	I1028 11:12:18.196152  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:12:18.196173  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.198995  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.199315  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.199339  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.199521  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.199709  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.199854  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.199983  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.288841  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:12:18.288936  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:12:18.314840  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:12:18.314910  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:12:18.341393  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:12:18.341485  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:12:18.366854  150723 provision.go:87] duration metric: took 271.722974ms to configureAuth
	I1028 11:12:18.366893  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:12:18.367124  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:18.367212  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.370267  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.370606  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.370639  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.370796  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.371029  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.371173  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.371307  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.371456  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:18.371620  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:18.371634  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:12:18.612895  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:12:18.612923  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:12:18.612931  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetURL
	I1028 11:12:18.614354  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using libvirt version 6000000
	I1028 11:12:18.616667  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.617056  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.617087  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.617192  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:12:18.617204  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:12:18.617212  150723 client.go:171] duration metric: took 26.86402649s to LocalClient.Create
	I1028 11:12:18.617234  150723 start.go:167] duration metric: took 26.864111247s to libmachine.API.Create "ha-928358"
	I1028 11:12:18.617248  150723 start.go:293] postStartSetup for "ha-928358-m02" (driver="kvm2")
	I1028 11:12:18.617264  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:12:18.617289  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.617583  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:12:18.617614  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.619991  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.620293  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.620324  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.620465  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.620632  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.620807  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.620947  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.709453  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:12:18.714006  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:12:18.714050  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:12:18.714135  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:12:18.714212  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:12:18.714223  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:12:18.714317  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:12:18.725069  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:12:18.750381  150723 start.go:296] duration metric: took 133.112799ms for postStartSetup
	I1028 11:12:18.750443  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:12:18.751083  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.753465  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.753830  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.753860  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.754104  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:12:18.754302  150723 start.go:128] duration metric: took 27.019366662s to createHost
	I1028 11:12:18.754324  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.756274  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.756584  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.756606  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.756746  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.756928  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.757083  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.757211  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.757395  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:18.757617  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:18.757632  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:12:18.870465  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730113938.848702185
	
	I1028 11:12:18.870492  150723 fix.go:216] guest clock: 1730113938.848702185
	I1028 11:12:18.870502  150723 fix.go:229] Guest: 2024-10-28 11:12:18.848702185 +0000 UTC Remote: 2024-10-28 11:12:18.754313813 +0000 UTC m=+79.331053022 (delta=94.388372ms)
	I1028 11:12:18.870523  150723 fix.go:200] guest clock delta is within tolerance: 94.388372ms
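The guest clock check above reads `date +%s.%N` over SSH, compares it to the host's clock, and only considers resyncing if the skew exceeds a tolerance; here the 94ms delta is accepted. A small sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, not necessarily minikube's threshold.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK mirrors the "guest clock delta is within tolerance" check:
// it compares a guest timestamp (as returned by `date +%s.%N` over SSH)
// against the local clock and accepts any skew below maxSkew.
func clockDeltaOK(guestUnixSec float64, maxSkew time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestUnixSec*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= maxSkew
}

func main() {
	// Simulate a guest clock read; in the log this comes from `date +%s.%N`.
	guestNow := float64(time.Now().UnixNano()) / 1e9
	delta, ok := clockDeltaOK(guestNow, 2*time.Second) // assumed tolerance
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}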
	I1028 11:12:18.870530  150723 start.go:83] releasing machines lock for "ha-928358-m02", held for 27.135687063s
	I1028 11:12:18.870557  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.870818  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.873499  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.873921  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.873952  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.876354  150723 out.go:177] * Found network options:
	I1028 11:12:18.877803  150723 out.go:177]   - NO_PROXY=192.168.39.206
	W1028 11:12:18.879297  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:12:18.879332  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.879863  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.880042  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.880145  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:12:18.880199  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	W1028 11:12:18.880223  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:12:18.880307  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:12:18.880332  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.882741  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883009  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.883032  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883152  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883178  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.883365  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.883531  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.883570  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.883597  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883673  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.883773  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.883886  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.883979  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.884097  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:19.140607  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:12:19.146803  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:12:19.146880  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:12:19.163725  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:12:19.163760  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:12:19.163823  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:12:19.180717  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:12:19.195299  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:12:19.195367  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:12:19.209555  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:12:19.223597  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:12:19.345039  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:12:19.505186  150723 docker.go:233] disabling docker service ...
	I1028 11:12:19.505264  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:12:19.520570  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:12:19.534795  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:12:19.656005  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:12:19.777835  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:12:19.793076  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:12:19.813202  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:12:19.813275  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.824795  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:12:19.824878  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.836376  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.847788  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.858444  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:12:19.869710  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.880881  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.900116  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.910944  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:12:19.921199  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:12:19.921284  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:12:19.936681  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
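The sequence just above is the usual bridge-netfilter bootstrap: the sysctl probe fails because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet, so br_netfilter is loaded with modprobe and IPv4 forwarding is enabled. A hedged sketch of that sequence; it needs root to run for real, and ensureBridgeNetfilter is an illustrative helper rather than minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter reproduces the steps in the log: if the
// bridge-nf-call-iptables sysctl node is missing, load br_netfilter
// first, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Sysctl node absent: the br_netfilter module is not loaded yet.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward (requires root).
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}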
	I1028 11:12:19.954317  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:20.080754  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:12:20.180414  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:12:20.180503  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:12:20.185906  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:12:20.185979  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:12:20.190133  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:12:20.233553  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:12:20.233626  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:12:20.262764  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:12:20.298972  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:12:20.300478  150723 out.go:177]   - env NO_PROXY=192.168.39.206
	I1028 11:12:20.301810  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:20.304361  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:20.304709  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:20.304731  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:20.304901  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:12:20.309556  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:12:20.323672  150723 mustload.go:65] Loading cluster: ha-928358
	I1028 11:12:20.323882  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:20.324235  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:20.324287  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:20.339013  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I1028 11:12:20.339463  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:20.340030  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:20.340052  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:20.340399  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:20.340615  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:12:20.342314  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:12:20.342631  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:20.342680  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:20.357539  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I1028 11:12:20.358002  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:20.358498  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:20.358519  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:20.359008  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:20.359212  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:12:20.359422  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.15
	I1028 11:12:20.359434  150723 certs.go:194] generating shared ca certs ...
	I1028 11:12:20.359450  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.359573  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:12:20.359614  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:12:20.359623  150723 certs.go:256] generating profile certs ...
	I1028 11:12:20.359689  150723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:12:20.359712  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94
	I1028 11:12:20.359727  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.254]
	I1028 11:12:20.442903  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 ...
	I1028 11:12:20.442934  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94: {Name:mk85a4e1a50b9026ab3d6dc4495b321bb7e02ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.443115  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94 ...
	I1028 11:12:20.443128  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94: {Name:mk7f773e25633de1a7b22c2c20b13ade22c5f211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.443202  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:12:20.443334  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:12:20.443463  150723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:12:20.443480  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:12:20.443493  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:12:20.443506  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:12:20.443519  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:12:20.443535  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:12:20.443547  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:12:20.443559  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:12:20.443571  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:12:20.443620  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:12:20.443647  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:12:20.443657  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:12:20.443683  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:12:20.443705  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:12:20.443728  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:12:20.443767  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:12:20.443793  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:20.443806  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:12:20.443820  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:12:20.443852  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:12:20.446971  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:20.447376  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:12:20.447407  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:20.447537  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:12:20.447754  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:12:20.447909  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:12:20.448040  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:12:20.533935  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:12:20.540194  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:12:20.553555  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:12:20.558471  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:12:20.571472  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:12:20.576267  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:12:20.588003  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:12:20.593338  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:12:20.605038  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:12:20.609724  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:12:20.623742  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:12:20.628679  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:12:20.640341  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:12:20.667017  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:12:20.692744  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:12:20.718588  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:12:20.748034  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:12:20.775373  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:12:20.802947  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:12:20.831097  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:12:20.858123  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:12:20.882703  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:12:20.907628  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:12:20.933325  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:12:20.951380  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:12:20.970398  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:12:20.988118  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:12:21.006403  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:12:21.027746  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:12:21.046174  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:12:21.066465  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:12:21.072838  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:12:21.086541  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.091618  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.091672  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.098303  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:12:21.110328  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:12:21.122629  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.127701  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.127772  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.134271  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:12:21.146879  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:12:21.159782  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.165113  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.165173  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.171693  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
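Each certificate installed above gets a companion symlink named after its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trusted CAs in /etc/ssl/certs. A sketch of that hash-and-link step, shelling out to openssl exactly as the log does; paths follow the log and running it for real needs root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mimics the pair of commands in the log: compute the OpenSSL
// subject hash of a CA certificate and symlink it as <hash>.0 into the
// system certificate directory so TLS clients can find it.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}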
	I1028 11:12:21.183939  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:12:21.188218  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:12:21.188285  150723 kubeadm.go:934] updating node {m02 192.168.39.15 8443 v1.31.2 crio true true} ...
	I1028 11:12:21.188380  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:12:21.188402  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:12:21.188440  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:12:21.207772  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:12:21.207836  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
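The YAML above is the kube-vip static-pod manifest that kube-vip.go renders: the VIP 192.168.39.254 is announced over ARP on eth0, leader election keeps a single holder of the address, and lb_enable/lb_port spread API-server traffic across port 8443 on the control-plane nodes. A rough sketch of how such a manifest can be rendered with text/template (the template and struct here are simplified stand-ins, not minikube's real ones):

package main

import (
	"os"
	"text/template"
)

// Simplified stand-in for minikube's kube-vip template; only a few fields are shown.
const kubeVIPTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    env:
    - name: address
      value: {{ .VIP }}
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

type kubeVIPConfig struct {
	Image, VIP, Interface string
	Port                  int
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
	// Values taken from the log; in the real flow the result is written to
	// /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it as a static pod.
	cfg := kubeVIPConfig{Image: "ghcr.io/kube-vip/kube-vip:v0.8.4", VIP: "192.168.39.254", Interface: "eth0", Port: 8443}
	_ = t.Execute(os.Stdout, cfg)
}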
	I1028 11:12:21.207903  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:12:21.219161  150723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:12:21.219233  150723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:12:21.229788  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:12:21.229822  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:12:21.229868  150723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 11:12:21.229883  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:12:21.229901  150723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 11:12:21.234643  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:12:21.234682  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:12:22.169217  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:12:22.169290  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:12:22.175155  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:12:22.175187  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:12:22.612156  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:12:22.630404  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:12:22.630517  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:12:22.635637  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:12:22.635690  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
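Each binary is fetched from dl.k8s.io with a checksum=file: URL, i.e. the expected SHA-256 is read from the sibling .sha256 file and the download is verified before it is cached locally and scp'd to /var/lib/minikube/binaries/v1.31.2 on the node. A rough sketch of that verify-after-download idea (plain net/http; not minikube's actual download package):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchAndVerify downloads url to dst and checks it against the SHA-256
// published at url+".sha256", roughly what the checksum=file: URLs above imply.
func fetchAndVerify(url, dst string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}

func main() {
	err := fetchAndVerify("https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl", "kubectl")
	fmt.Println(err)
}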
	I1028 11:12:22.984793  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:12:22.995829  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:12:23.014631  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:12:23.033132  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:12:23.051694  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:12:23.056057  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:12:23.069704  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:23.193632  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:12:23.213616  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:12:23.214094  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:23.214154  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:23.229467  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39255
	I1028 11:12:23.229946  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:23.230470  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:23.230493  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:23.230811  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:23.231005  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:12:23.231156  150723 start.go:317] joinCluster: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:12:23.231250  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:12:23.231265  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:12:23.234605  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:23.235105  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:12:23.235130  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:23.235484  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:12:23.235658  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:12:23.235817  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:12:23.235978  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:12:23.587402  150723 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:12:23.587450  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0up603.shgmvlsrpj1mebjg --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m02 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443"
	I1028 11:12:49.062311  150723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0up603.shgmvlsrpj1mebjg --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m02 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443": (25.474831461s)
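The join command embeds a bootstrap token plus --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's Subject Public Key Info; it lets the joining node authenticate the control plane before trusting it. A small sketch of how that hash can be computed from a CA PEM (the path is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes the value kubeadm expects for --discovery-token-ca-cert-hash:
// "sha256:" + SHA-256 of the CA certificate's Subject Public Key Info.
func caCertHash(caPEMPath string) (string, error) {
	data, err := os.ReadFile(caPEMPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", caPEMPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt") // illustrative path
	fmt.Println(h, err)
}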
	I1028 11:12:49.062358  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:12:49.750628  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358-m02 minikube.k8s.io/updated_at=2024_10_28T11_12_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=false
	I1028 11:12:49.901989  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-928358-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:12:50.021163  150723 start.go:319] duration metric: took 26.789999674s to joinCluster
	I1028 11:12:50.021261  150723 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:12:50.021588  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:50.022686  150723 out.go:177] * Verifying Kubernetes components...
	I1028 11:12:50.024027  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:50.259666  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:12:50.294975  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:12:50.295261  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:12:50.295325  150723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.206:8443
	I1028 11:12:50.295539  150723 node_ready.go:35] waiting up to 6m0s for node "ha-928358-m02" to be "Ready" ...
	I1028 11:12:50.295634  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:50.295644  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:50.295655  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:50.295661  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:50.311123  150723 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1028 11:12:50.796718  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:50.796750  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:50.796761  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:50.796767  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:50.800704  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:51.296741  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:51.296771  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:51.296783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:51.296789  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:51.301317  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:51.796429  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:51.796461  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:51.796472  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:51.796479  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:51.902786  150723 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I1028 11:12:52.295866  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:52.295889  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:52.295896  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:52.295902  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:52.299707  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:52.300296  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:52.796802  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:52.796836  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:52.796848  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:52.796854  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:52.801105  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:53.296430  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:53.296464  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:53.296476  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:53.296482  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:53.300401  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:53.796454  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:53.796475  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:53.796483  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:53.796487  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:53.800686  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:54.296632  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:54.296658  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:54.296669  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:54.296675  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:54.430413  150723 round_trippers.go:574] Response Status: 200 OK in 133 milliseconds
	I1028 11:12:54.431260  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:54.796228  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:54.796251  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:54.796260  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:54.796297  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:54.799743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:55.295741  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:55.295769  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:55.295779  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:55.295784  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:55.300264  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:55.796141  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:55.796166  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:55.796177  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:55.796183  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:55.799984  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:56.296002  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:56.296025  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:56.296033  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:56.296038  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:56.299236  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:56.796285  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:56.796327  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:56.796338  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:56.796343  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:56.801079  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:56.801722  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:57.295973  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:57.296010  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:57.296019  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:57.296022  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:57.300070  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:57.796110  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:57.796138  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:57.796150  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:57.796156  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:57.800286  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:58.296657  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:58.296684  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:58.296694  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:58.296700  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:58.300601  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:58.795760  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:58.795783  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:58.795791  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:58.795795  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:58.799253  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:59.296427  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:59.296448  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:59.296457  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:59.296461  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:59.300112  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:59.300577  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:59.795852  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:59.795874  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:59.795882  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:59.795886  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:59.799187  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:00.296355  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:00.296376  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:00.296385  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:00.296388  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:00.300090  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:00.796212  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:00.796241  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:00.796250  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:00.796255  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:00.799643  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:01.296675  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:01.296698  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:01.296706  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:01.296720  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:01.300506  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:01.300981  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:01.795747  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:01.795781  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:01.795793  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:01.795800  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:01.799384  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:02.296561  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:02.296587  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:02.296595  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:02.296601  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:02.300227  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:02.796111  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:02.796139  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:02.796150  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:02.796175  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:02.799502  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:03.295908  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:03.295932  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:03.295940  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:03.295944  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:03.299608  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:03.796579  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:03.796602  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:03.796611  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:03.796615  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:03.801307  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:03.802803  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:04.296022  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:04.296047  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:04.296055  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:04.296058  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:04.300556  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:04.796471  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:04.796494  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:04.796502  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:04.796507  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:04.801460  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:05.296387  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:05.296409  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:05.296417  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:05.296422  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:05.299743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:05.796148  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:05.796171  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:05.796179  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:05.796184  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:05.801488  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:06.296441  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:06.296475  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:06.296487  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:06.296492  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:06.300636  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:06.301140  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:06.796015  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:06.796054  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:06.796067  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:06.796073  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:06.802178  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:07.295805  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:07.295832  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:07.295841  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:07.295845  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:07.300831  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:07.796368  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:07.796395  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:07.796407  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:07.796413  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:07.800287  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.295819  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:08.295846  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.295856  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.295862  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.303573  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:13:08.304813  150723 node_ready.go:49] node "ha-928358-m02" has status "Ready":"True"
	I1028 11:13:08.304842  150723 node_ready.go:38] duration metric: took 18.009284836s for node "ha-928358-m02" to be "Ready" ...
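The node_ready.go loop above simply re-GETs /api/v1/nodes/ha-928358-m02 every ~500ms until the NodeReady condition turns True, which takes about 18s here. With client-go, an equivalent check might look roughly like this (kubeconfig path is illustrative; this is not minikube's own code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition of the named node is True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := nodeReady(context.Background(), cs, "ha-928358-m02")
		fmt.Println(ok, err)
		if ok {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}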
	I1028 11:13:08.304855  150723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:13:08.304964  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:08.304977  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.304986  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.304996  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.314253  150723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:13:08.322556  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.322661  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gnm9r
	I1028 11:13:08.322674  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.322686  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.322694  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.325598  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.326235  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.326251  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.326262  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.326267  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.329653  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.330306  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.330330  150723 pod_ready.go:82] duration metric: took 7.745243ms for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.330344  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.330420  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xxxgw
	I1028 11:13:08.330431  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.330443  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.330451  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.333854  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.334683  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.334698  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.334709  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.334717  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.338575  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.339125  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.339151  150723 pod_ready.go:82] duration metric: took 8.79493ms for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.339166  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.339239  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358
	I1028 11:13:08.339251  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.339260  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.339266  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.342147  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.342887  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.342903  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.342914  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.342919  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.345586  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.346017  150723 pod_ready.go:93] pod "etcd-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.346037  150723 pod_ready.go:82] duration metric: took 6.859007ms for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.346049  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.346126  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m02
	I1028 11:13:08.346136  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.346149  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.346155  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.349837  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.350760  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:08.350776  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.350783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.350787  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.354111  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.354776  150723 pod_ready.go:93] pod "etcd-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.354797  150723 pod_ready.go:82] duration metric: took 8.74104ms for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.354818  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.496252  150723 request.go:632] Waited for 141.345028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:13:08.496314  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:13:08.496320  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.496333  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.496338  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.500168  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.696151  150723 request.go:632] Waited for 195.353851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.696219  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.696228  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.696240  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.696249  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.700151  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.701139  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.701160  150723 pod_ready.go:82] duration metric: took 346.331354ms for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.701174  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.896292  150723 request.go:632] Waited for 195.012978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:13:08.896361  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:13:08.896371  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.896387  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.896396  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.900050  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.096401  150723 request.go:632] Waited for 195.396634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.096476  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.096481  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.096489  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.096493  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.100986  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:09.101422  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.101442  150723 pod_ready.go:82] duration metric: took 400.258829ms for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.101456  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.296560  150723 request.go:632] Waited for 195.02851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:13:09.296638  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:13:09.296643  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.296654  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.296672  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.300596  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.496746  150723 request.go:632] Waited for 195.271102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:09.496832  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:09.496844  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.496856  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.496863  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.500375  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.501182  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.501208  150723 pod_ready.go:82] duration metric: took 399.742852ms for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.501223  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.696672  150723 request.go:632] Waited for 195.364831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:13:09.696747  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:13:09.696753  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.696761  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.696765  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.700353  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.896500  150723 request.go:632] Waited for 195.402622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.896557  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.896562  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.896570  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.896574  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.899876  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.900586  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.900606  150723 pod_ready.go:82] duration metric: took 399.370555ms for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.900621  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.096828  150723 request.go:632] Waited for 196.099526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:13:10.096889  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:13:10.096895  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.096902  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.096907  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.100607  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.295935  150723 request.go:632] Waited for 194.296247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:10.296028  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:10.296036  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.296047  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.296052  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.299514  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.299992  150723 pod_ready.go:93] pod "kube-proxy-8fxdn" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:10.300013  150723 pod_ready.go:82] duration metric: took 399.384578ms for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.300033  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.496260  150723 request.go:632] Waited for 196.135494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:13:10.496330  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:13:10.496339  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.496347  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.496352  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.500702  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:10.696747  150723 request.go:632] Waited for 195.398969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:10.696828  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:10.696834  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.696842  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.696849  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.700510  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.701486  150723 pod_ready.go:93] pod "kube-proxy-cfhp5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:10.701505  150723 pod_ready.go:82] duration metric: took 401.465094ms for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.701515  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.896720  150723 request.go:632] Waited for 195.109133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:13:10.896777  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:13:10.896783  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.896790  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.896795  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.900315  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.096400  150723 request.go:632] Waited for 195.36981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:11.096478  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:11.096483  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.096493  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.096499  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.100065  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.100566  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:11.100590  150723 pod_ready.go:82] duration metric: took 399.065558ms for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.100600  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.296785  150723 request.go:632] Waited for 196.108788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:13:11.296873  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:13:11.296881  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.296891  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.296896  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.300760  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.495907  150723 request.go:632] Waited for 194.292764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:11.495994  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:11.496001  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.496011  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.496021  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.500420  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:11.500960  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:11.500979  150723 pod_ready.go:82] duration metric: took 400.371324ms for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.500991  150723 pod_ready.go:39] duration metric: took 3.196117998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
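The lines above show the readiness gate: pod_ready.go polls each system-critical pod's "Ready" condition through the API server until all of them report True. Purely as an illustration of that shape of poll (not minikube's own code), a minimal Go sketch follows; the API server address, namespace, and pod name are taken from the log, while the bearer token is a placeholder and TLS verification is skipped only to keep the sketch short (the real run authenticates with the profile's client certificates).

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// podStatus keeps only the fields the readiness check needs.
type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	const (
		apiServer = "https://192.168.39.206:8443"
		namespace = "kube-system"
		pod       = "kube-scheduler-ha-928358"
		token     = "REPLACE_WITH_TOKEN" // hypothetical; the real run uses client certs
	)
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
		Timeout:   10 * time.Second,
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
	for time.Now().Before(deadline) {
		req, err := http.NewRequest("GET",
			fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s", apiServer, namespace, pod), nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Authorization", "Bearer "+token)
		req.Header.Set("Accept", "application/json")

		resp, err := client.Do(req)
		if err == nil && resp.StatusCode == http.StatusOK {
			var ps podStatus
			if json.NewDecoder(resp.Body).Decode(&ps) == nil {
				for _, c := range ps.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						resp.Body.Close()
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			resp.Body.Close()
		} else if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```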
	I1028 11:13:11.501012  150723 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:13:11.501071  150723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:13:11.518775  150723 api_server.go:72] duration metric: took 21.497464525s to wait for apiserver process to appear ...
	I1028 11:13:11.518811  150723 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:13:11.518839  150723 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1028 11:13:11.523103  150723 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1028 11:13:11.523168  150723 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1028 11:13:11.523173  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.523180  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.523189  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.524064  150723 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:13:11.524163  150723 api_server.go:141] control plane version: v1.31.2
	I1028 11:13:11.524189  150723 api_server.go:131] duration metric: took 5.370992ms to wait for apiserver health ...
	I1028 11:13:11.524197  150723 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:13:11.696656  150723 request.go:632] Waited for 172.384226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:11.696727  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:11.696733  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.696740  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.696744  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.702489  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:11.707749  150723 system_pods.go:59] 17 kube-system pods found
	I1028 11:13:11.707791  150723 system_pods.go:61] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:13:11.707798  150723 system_pods.go:61] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:13:11.707802  150723 system_pods.go:61] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:13:11.707805  150723 system_pods.go:61] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:13:11.707808  150723 system_pods.go:61] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:13:11.707812  150723 system_pods.go:61] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:13:11.707815  150723 system_pods.go:61] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:13:11.707818  150723 system_pods.go:61] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:13:11.707821  150723 system_pods.go:61] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:13:11.707824  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:13:11.707828  150723 system_pods.go:61] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:13:11.707831  150723 system_pods.go:61] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:13:11.707833  150723 system_pods.go:61] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:13:11.707837  150723 system_pods.go:61] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:13:11.707840  150723 system_pods.go:61] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:13:11.707843  150723 system_pods.go:61] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:13:11.707847  150723 system_pods.go:61] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:13:11.707852  150723 system_pods.go:74] duration metric: took 183.650264ms to wait for pod list to return data ...
	I1028 11:13:11.707863  150723 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:13:11.895935  150723 request.go:632] Waited for 187.997842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:13:11.895992  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:13:11.895997  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.896004  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.896009  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.900031  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:11.900269  150723 default_sa.go:45] found service account: "default"
	I1028 11:13:11.900286  150723 default_sa.go:55] duration metric: took 192.416558ms for default service account to be created ...
	I1028 11:13:11.900298  150723 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:13:12.096570  150723 request.go:632] Waited for 196.184771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:12.096668  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:12.096678  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:12.096690  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:12.096703  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:12.102990  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:13:12.107971  150723 system_pods.go:86] 17 kube-system pods found
	I1028 11:13:12.108008  150723 system_pods.go:89] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:13:12.108017  150723 system_pods.go:89] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:13:12.108022  150723 system_pods.go:89] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:13:12.108027  150723 system_pods.go:89] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:13:12.108032  150723 system_pods.go:89] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:13:12.108037  150723 system_pods.go:89] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:13:12.108044  150723 system_pods.go:89] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:13:12.108051  150723 system_pods.go:89] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:13:12.108056  150723 system_pods.go:89] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:13:12.108062  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:13:12.108067  150723 system_pods.go:89] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:13:12.108072  150723 system_pods.go:89] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:13:12.108076  150723 system_pods.go:89] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:13:12.108082  150723 system_pods.go:89] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:13:12.108088  150723 system_pods.go:89] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:13:12.108094  150723 system_pods.go:89] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:13:12.108101  150723 system_pods.go:89] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:13:12.108116  150723 system_pods.go:126] duration metric: took 207.810112ms to wait for k8s-apps to be running ...
	I1028 11:13:12.108138  150723 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:13:12.108196  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:13:12.125765  150723 system_svc.go:56] duration metric: took 17.59726ms WaitForService to wait for kubelet
	I1028 11:13:12.125805  150723 kubeadm.go:582] duration metric: took 22.104503497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:13:12.125835  150723 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:13:12.296271  150723 request.go:632] Waited for 170.346607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1028 11:13:12.296352  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1028 11:13:12.296358  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:12.296365  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:12.296370  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:12.301322  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:12.302235  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:13:12.302261  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:13:12.302297  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:13:12.302303  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:13:12.302310  150723 node_conditions.go:105] duration metric: took 176.469824ms to run NodePressure ...
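The NodePressure step above reads each node's reported capacity (the log shows 2 CPUs and 17734596Ki of ephemeral storage per node). As a rough illustration only, the same fields can be pulled from GET /api/v1/nodes; the endpoint and token below are placeholders, and capacity values arrive as Kubernetes resource.Quantity strings.

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// nodeList keeps only the per-node capacity map from the NodeList response.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	req, _ := http.NewRequest("GET", "https://192.168.39.206:8443/api/v1/nodes", nil)
	req.Header.Set("Authorization", "Bearer REPLACE_WITH_TOKEN") // hypothetical
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var nodes nodeList
	if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// e.g. cpu="2", ephemeral-storage="17734596Ki", matching the log above.
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
	}
}
```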
	I1028 11:13:12.302331  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:13:12.302371  150723 start.go:255] writing updated cluster config ...
	I1028 11:13:12.304722  150723 out.go:201] 
	I1028 11:13:12.306493  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:12.306595  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:12.308496  150723 out.go:177] * Starting "ha-928358-m03" control-plane node in "ha-928358" cluster
	I1028 11:13:12.310210  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:13:12.310234  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:13:12.310336  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:13:12.310347  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:13:12.310430  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:12.310601  150723 start.go:360] acquireMachinesLock for ha-928358-m03: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:13:12.310642  150723 start.go:364] duration metric: took 22.061µs to acquireMachinesLock for "ha-928358-m03"
	I1028 11:13:12.310662  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:13:12.310748  150723 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 11:13:12.312443  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:13:12.312555  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:12.312596  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:12.327768  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I1028 11:13:12.328249  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:12.328745  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:12.328765  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:12.329102  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:12.329311  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:12.329448  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:12.329611  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:13:12.329642  150723 client.go:168] LocalClient.Create starting
	I1028 11:13:12.329670  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:13:12.329703  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:13:12.329720  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:13:12.329768  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:13:12.329788  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:13:12.329799  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:13:12.329815  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:13:12.329826  150723 main.go:141] libmachine: (ha-928358-m03) Calling .PreCreateCheck
	I1028 11:13:12.329995  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:12.330372  150723 main.go:141] libmachine: Creating machine...
	I1028 11:13:12.330386  150723 main.go:141] libmachine: (ha-928358-m03) Calling .Create
	I1028 11:13:12.330528  150723 main.go:141] libmachine: (ha-928358-m03) Creating KVM machine...
	I1028 11:13:12.331834  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found existing default KVM network
	I1028 11:13:12.332000  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found existing private KVM network mk-ha-928358
	I1028 11:13:12.332124  150723 main.go:141] libmachine: (ha-928358-m03) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 ...
	I1028 11:13:12.332140  150723 main.go:141] libmachine: (ha-928358-m03) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:13:12.332221  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.332127  151534 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:13:12.332333  150723 main.go:141] libmachine: (ha-928358-m03) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:13:12.597391  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.597227  151534 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa...
	I1028 11:13:12.699922  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.699777  151534 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/ha-928358-m03.rawdisk...
	I1028 11:13:12.699960  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Writing magic tar header
	I1028 11:13:12.699975  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Writing SSH key tar header
	I1028 11:13:12.699986  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.699933  151534 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 ...
	I1028 11:13:12.700170  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 (perms=drwx------)
	I1028 11:13:12.700205  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:13:12.700218  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03
	I1028 11:13:12.700232  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:13:12.700244  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:13:12.700258  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:13:12.700271  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:13:12.700287  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:13:12.700300  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:13:12.700313  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:13:12.700325  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:13:12.700339  150723 main.go:141] libmachine: (ha-928358-m03) Creating domain...
	I1028 11:13:12.700363  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:13:12.700371  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home
	I1028 11:13:12.700395  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Skipping /home - not owner
	I1028 11:13:12.701297  150723 main.go:141] libmachine: (ha-928358-m03) define libvirt domain using xml: 
	I1028 11:13:12.701328  150723 main.go:141] libmachine: (ha-928358-m03) <domain type='kvm'>
	I1028 11:13:12.701339  150723 main.go:141] libmachine: (ha-928358-m03)   <name>ha-928358-m03</name>
	I1028 11:13:12.701346  150723 main.go:141] libmachine: (ha-928358-m03)   <memory unit='MiB'>2200</memory>
	I1028 11:13:12.701358  150723 main.go:141] libmachine: (ha-928358-m03)   <vcpu>2</vcpu>
	I1028 11:13:12.701364  150723 main.go:141] libmachine: (ha-928358-m03)   <features>
	I1028 11:13:12.701373  150723 main.go:141] libmachine: (ha-928358-m03)     <acpi/>
	I1028 11:13:12.701383  150723 main.go:141] libmachine: (ha-928358-m03)     <apic/>
	I1028 11:13:12.701391  150723 main.go:141] libmachine: (ha-928358-m03)     <pae/>
	I1028 11:13:12.701404  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701415  150723 main.go:141] libmachine: (ha-928358-m03)   </features>
	I1028 11:13:12.701423  150723 main.go:141] libmachine: (ha-928358-m03)   <cpu mode='host-passthrough'>
	I1028 11:13:12.701433  150723 main.go:141] libmachine: (ha-928358-m03)   
	I1028 11:13:12.701445  150723 main.go:141] libmachine: (ha-928358-m03)   </cpu>
	I1028 11:13:12.701456  150723 main.go:141] libmachine: (ha-928358-m03)   <os>
	I1028 11:13:12.701463  150723 main.go:141] libmachine: (ha-928358-m03)     <type>hvm</type>
	I1028 11:13:12.701472  150723 main.go:141] libmachine: (ha-928358-m03)     <boot dev='cdrom'/>
	I1028 11:13:12.701478  150723 main.go:141] libmachine: (ha-928358-m03)     <boot dev='hd'/>
	I1028 11:13:12.701513  150723 main.go:141] libmachine: (ha-928358-m03)     <bootmenu enable='no'/>
	I1028 11:13:12.701555  150723 main.go:141] libmachine: (ha-928358-m03)   </os>
	I1028 11:13:12.701565  150723 main.go:141] libmachine: (ha-928358-m03)   <devices>
	I1028 11:13:12.701573  150723 main.go:141] libmachine: (ha-928358-m03)     <disk type='file' device='cdrom'>
	I1028 11:13:12.701585  150723 main.go:141] libmachine: (ha-928358-m03)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/boot2docker.iso'/>
	I1028 11:13:12.701593  150723 main.go:141] libmachine: (ha-928358-m03)       <target dev='hdc' bus='scsi'/>
	I1028 11:13:12.701600  150723 main.go:141] libmachine: (ha-928358-m03)       <readonly/>
	I1028 11:13:12.701607  150723 main.go:141] libmachine: (ha-928358-m03)     </disk>
	I1028 11:13:12.701622  150723 main.go:141] libmachine: (ha-928358-m03)     <disk type='file' device='disk'>
	I1028 11:13:12.701635  150723 main.go:141] libmachine: (ha-928358-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:13:12.701651  150723 main.go:141] libmachine: (ha-928358-m03)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/ha-928358-m03.rawdisk'/>
	I1028 11:13:12.701662  150723 main.go:141] libmachine: (ha-928358-m03)       <target dev='hda' bus='virtio'/>
	I1028 11:13:12.701673  150723 main.go:141] libmachine: (ha-928358-m03)     </disk>
	I1028 11:13:12.701683  150723 main.go:141] libmachine: (ha-928358-m03)     <interface type='network'>
	I1028 11:13:12.701717  150723 main.go:141] libmachine: (ha-928358-m03)       <source network='mk-ha-928358'/>
	I1028 11:13:12.701741  150723 main.go:141] libmachine: (ha-928358-m03)       <model type='virtio'/>
	I1028 11:13:12.701754  150723 main.go:141] libmachine: (ha-928358-m03)     </interface>
	I1028 11:13:12.701765  150723 main.go:141] libmachine: (ha-928358-m03)     <interface type='network'>
	I1028 11:13:12.701776  150723 main.go:141] libmachine: (ha-928358-m03)       <source network='default'/>
	I1028 11:13:12.701787  150723 main.go:141] libmachine: (ha-928358-m03)       <model type='virtio'/>
	I1028 11:13:12.701800  150723 main.go:141] libmachine: (ha-928358-m03)     </interface>
	I1028 11:13:12.701809  150723 main.go:141] libmachine: (ha-928358-m03)     <serial type='pty'>
	I1028 11:13:12.701821  150723 main.go:141] libmachine: (ha-928358-m03)       <target port='0'/>
	I1028 11:13:12.701833  150723 main.go:141] libmachine: (ha-928358-m03)     </serial>
	I1028 11:13:12.701844  150723 main.go:141] libmachine: (ha-928358-m03)     <console type='pty'>
	I1028 11:13:12.701855  150723 main.go:141] libmachine: (ha-928358-m03)       <target type='serial' port='0'/>
	I1028 11:13:12.701866  150723 main.go:141] libmachine: (ha-928358-m03)     </console>
	I1028 11:13:12.701874  150723 main.go:141] libmachine: (ha-928358-m03)     <rng model='virtio'>
	I1028 11:13:12.701883  150723 main.go:141] libmachine: (ha-928358-m03)       <backend model='random'>/dev/random</backend>
	I1028 11:13:12.701898  150723 main.go:141] libmachine: (ha-928358-m03)     </rng>
	I1028 11:13:12.701909  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701917  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701927  150723 main.go:141] libmachine: (ha-928358-m03)   </devices>
	I1028 11:13:12.701935  150723 main.go:141] libmachine: (ha-928358-m03) </domain>
	I1028 11:13:12.701944  150723 main.go:141] libmachine: (ha-928358-m03) 
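The block above is the libvirt domain XML that the kvm2 driver defines programmatically (through the libvirt API) before booting the m03 VM. Only as a rough manual equivalent, assuming virsh is installed and the same XML has been saved to a file, the define-and-start step could be driven from Go like this; the XML path is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assumed file containing a domain definition like the one logged above.
	const xmlPath = "/tmp/ha-928358-m03.xml" // hypothetical path
	if _, err := os.Stat(xmlPath); err != nil {
		panic(err)
	}
	// Define the domain from the XML, then start it; the connection URI
	// matches KVMQemuURI (qemu:///system) in the machine config above.
	for _, args := range [][]string{
		{"-c", "qemu:///system", "define", xmlPath},
		{"-c", "qemu:///system", "start", "ha-928358-m03"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s", args, out)
		if err != nil {
			panic(err)
		}
	}
}
```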
	I1028 11:13:12.709093  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:b5:fb:00 in network default
	I1028 11:13:12.709827  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring networks are active...
	I1028 11:13:12.709849  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:12.710555  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring network default is active
	I1028 11:13:12.710786  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring network mk-ha-928358 is active
	I1028 11:13:12.711115  150723 main.go:141] libmachine: (ha-928358-m03) Getting domain xml...
	I1028 11:13:12.711807  150723 main.go:141] libmachine: (ha-928358-m03) Creating domain...
	I1028 11:13:13.995752  150723 main.go:141] libmachine: (ha-928358-m03) Waiting to get IP...
	I1028 11:13:13.996563  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:13.997045  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:13.997085  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:13.997018  151534 retry.go:31] will retry after 234.151571ms: waiting for machine to come up
	I1028 11:13:14.232519  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.233064  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.233096  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.232999  151534 retry.go:31] will retry after 249.582339ms: waiting for machine to come up
	I1028 11:13:14.484383  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.484878  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.484915  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.484812  151534 retry.go:31] will retry after 409.553215ms: waiting for machine to come up
	I1028 11:13:14.896380  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.896855  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.896887  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.896797  151534 retry.go:31] will retry after 412.085621ms: waiting for machine to come up
	I1028 11:13:15.310086  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:15.310769  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:15.310799  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:15.310719  151534 retry.go:31] will retry after 651.315136ms: waiting for machine to come up
	I1028 11:13:15.963589  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:15.964049  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:15.964078  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:15.963990  151534 retry.go:31] will retry after 936.522294ms: waiting for machine to come up
	I1028 11:13:16.902173  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:16.902668  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:16.902689  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:16.902618  151534 retry.go:31] will retry after 774.455135ms: waiting for machine to come up
	I1028 11:13:17.679023  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:17.679574  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:17.679600  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:17.679540  151534 retry.go:31] will retry after 1.069131352s: waiting for machine to come up
	I1028 11:13:18.750780  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:18.751352  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:18.751375  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:18.751284  151534 retry.go:31] will retry after 1.587573663s: waiting for machine to come up
	I1028 11:13:20.340206  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:20.340612  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:20.340643  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:20.340566  151534 retry.go:31] will retry after 1.424108777s: waiting for machine to come up
	I1028 11:13:21.766872  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:21.767376  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:21.767397  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:21.767337  151534 retry.go:31] will retry after 1.867673803s: waiting for machine to come up
	I1028 11:13:23.637608  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:23.638075  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:23.638103  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:23.638049  151534 retry.go:31] will retry after 3.385284423s: waiting for machine to come up
	I1028 11:13:27.027812  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:27.028397  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:27.028423  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:27.028342  151534 retry.go:31] will retry after 4.143137357s: waiting for machine to come up
	I1028 11:13:31.174612  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:31.174990  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:31.175020  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:31.174951  151534 retry.go:31] will retry after 3.870983412s: waiting for machine to come up
	I1028 11:13:35.049044  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.049668  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has current primary IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.049716  150723 main.go:141] libmachine: (ha-928358-m03) Found IP for machine: 192.168.39.44
	I1028 11:13:35.049734  150723 main.go:141] libmachine: (ha-928358-m03) Reserving static IP address...
	I1028 11:13:35.050296  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find host DHCP lease matching {name: "ha-928358-m03", mac: "52:54:00:7e:d3:f9", ip: "192.168.39.44"} in network mk-ha-928358
	I1028 11:13:35.126256  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Getting to WaitForSSH function...
	I1028 11:13:35.126303  150723 main.go:141] libmachine: (ha-928358-m03) Reserved static IP address: 192.168.39.44
	I1028 11:13:35.126318  150723 main.go:141] libmachine: (ha-928358-m03) Waiting for SSH to be available...
	I1028 11:13:35.128851  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.129272  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.129315  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.129446  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using SSH client type: external
	I1028 11:13:35.129476  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa (-rw-------)
	I1028 11:13:35.129507  150723 main.go:141] libmachine: (ha-928358-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:13:35.129520  150723 main.go:141] libmachine: (ha-928358-m03) DBG | About to run SSH command:
	I1028 11:13:35.129564  150723 main.go:141] libmachine: (ha-928358-m03) DBG | exit 0
	I1028 11:13:35.253921  150723 main.go:141] libmachine: (ha-928358-m03) DBG | SSH cmd err, output: <nil>: 
	I1028 11:13:35.254211  150723 main.go:141] libmachine: (ha-928358-m03) KVM machine creation complete!
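WaitForSSH above declares the machine created once `exit 0` runs cleanly over SSH with the freshly generated key. A stripped-down version of that probe, using golang.org/x/crypto/ssh and treating the address, user, and key path as values copied from the log, might look like the sketch below; ignoring the host key mirrors the StrictHostKeyChecking=no option in the logged ssh command.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr with the given key until "exit 0" succeeds or the deadline passes.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				rerr := session.Run("exit 0")
				session.Close()
				client.Close()
				if rerr == nil {
					return nil // SSH is available
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	// Values taken from the log; the key is the machine's generated id_rsa.
	err := waitForSSH("192.168.39.44:22", "docker",
		"/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa",
		5*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("SSH is up")
}
```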
	I1028 11:13:35.254512  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:35.255052  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:35.255255  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:35.255399  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:13:35.255411  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetState
	I1028 11:13:35.256908  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:13:35.256921  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:13:35.256927  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:13:35.256932  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.259735  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.260211  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.260237  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.260436  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.260625  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.260784  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.260899  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.261057  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.261307  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.261321  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:13:35.360859  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:13:35.360890  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:13:35.360902  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.364454  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.364848  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.364904  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.365213  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.365431  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.365607  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.365742  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.365932  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.366116  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.366130  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:13:35.470987  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:13:35.471094  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:13:35.471109  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:13:35.471120  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.471399  150723 buildroot.go:166] provisioning hostname "ha-928358-m03"
	I1028 11:13:35.471424  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.471622  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.474085  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.474509  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.474542  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.474681  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.474871  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.475021  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.475156  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.475305  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.475494  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.475510  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358-m03 && echo "ha-928358-m03" | sudo tee /etc/hostname
	I1028 11:13:35.593400  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358-m03
	
	I1028 11:13:35.593429  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.596415  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.596740  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.596767  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.596962  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.597183  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.597361  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.597490  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.597704  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.597875  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.597892  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:13:35.715751  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:13:35.715791  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:13:35.715811  150723 buildroot.go:174] setting up certificates
	I1028 11:13:35.715821  150723 provision.go:84] configureAuth start
	I1028 11:13:35.715834  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.716106  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:35.718868  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.719187  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.719219  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.719354  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.721477  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.721760  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.721790  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.721917  150723 provision.go:143] copyHostCerts
	I1028 11:13:35.721979  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:13:35.722032  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:13:35.722044  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:13:35.722140  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:13:35.722245  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:13:35.722278  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:13:35.722289  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:13:35.722332  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:13:35.722402  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:13:35.722429  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:13:35.722435  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:13:35.722459  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:13:35.722531  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358-m03 san=[127.0.0.1 192.168.39.44 ha-928358-m03 localhost minikube]
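The provision.go line above generates a server certificate signed by the profile's CA, with SANs covering 127.0.0.1, the VM IP, the hostname, localhost, and minikube. A simplified sketch of that kind of step is below: it loads an existing CA pair, generates a fresh RSA key, and signs a server cert with the SANs reported in the log. The file names are placeholders, the CA key is assumed to be RSA/PKCS#1, and the private key would normally be written out alongside the certificate (server-key.pem) in the same way.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEMBlock reads a file and returns the DER bytes of its first PEM block.
func mustPEMBlock(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block.Bytes
}

func main() {
	// Placeholder paths; the log uses ca.pem / ca-key.pem under the minikube certs dir.
	caCert, err := x509.ParseCertificate(mustPEMBlock("ca.pem"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock("ca-key.pem")) // assumes an RSA/PKCS#1 CA key
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs and org mirror the values in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-928358-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-928358-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.44")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```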
	I1028 11:13:35.825404  150723 provision.go:177] copyRemoteCerts
	I1028 11:13:35.825459  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:13:35.825483  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.828415  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.828773  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.828803  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.828972  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.829151  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.829337  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.829485  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:35.913472  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:13:35.913575  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:13:35.940828  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:13:35.940904  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:13:35.968009  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:13:35.968078  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 11:13:35.997592  150723 provision.go:87] duration metric: took 281.755193ms to configureAuth
	I1028 11:13:35.997618  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:13:35.997801  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:35.997869  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.000450  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.000935  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.000970  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.001165  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.001385  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.001575  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.001734  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.001893  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:36.002062  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:36.002076  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:13:36.221329  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:13:36.221364  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:13:36.221433  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetURL
	I1028 11:13:36.222571  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using libvirt version 6000000
	I1028 11:13:36.224781  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.225156  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.225179  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.225329  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:13:36.225344  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:13:36.225353  150723 client.go:171] duration metric: took 23.895703285s to LocalClient.Create
	I1028 11:13:36.225379  150723 start.go:167] duration metric: took 23.895771231s to libmachine.API.Create "ha-928358"
	I1028 11:13:36.225390  150723 start.go:293] postStartSetup for "ha-928358-m03" (driver="kvm2")
	I1028 11:13:36.225399  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:13:36.225413  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.225669  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:13:36.225696  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.227681  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.227995  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.228023  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.228147  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.228314  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.228474  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.228601  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.313594  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:13:36.318443  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:13:36.318477  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:13:36.318544  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:13:36.318614  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:13:36.318624  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:13:36.318705  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:13:36.330227  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:13:36.357995  150723 start.go:296] duration metric: took 132.588764ms for postStartSetup
	I1028 11:13:36.358059  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:36.358728  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:36.361773  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.362238  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.362267  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.362589  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:36.362828  150723 start.go:128] duration metric: took 24.052057424s to createHost
	I1028 11:13:36.362855  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.365684  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.365985  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.366016  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.366211  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.366426  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.366575  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.366696  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.366842  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:36.367055  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:36.367079  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:13:36.470814  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114016.442636655
	
	I1028 11:13:36.470843  150723 fix.go:216] guest clock: 1730114016.442636655
	I1028 11:13:36.470853  150723 fix.go:229] Guest: 2024-10-28 11:13:36.442636655 +0000 UTC Remote: 2024-10-28 11:13:36.362843133 +0000 UTC m=+156.939582341 (delta=79.793522ms)
	I1028 11:13:36.470869  150723 fix.go:200] guest clock delta is within tolerance: 79.793522ms
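
The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host-side timestamp, and accept the ~80ms delta as being within tolerance. A minimal Go sketch of that comparison, assuming a 1s tolerance purely for illustration (the log does not show the actual tolerance value, only that ~80ms passed):

    // clockdelta.go: sketch of the guest-vs-host clock check seen above.
    // The 1s tolerance is an assumption; the log only shows an accepted ~80ms delta.
    package main

    import (
        "fmt"
        "time"
    )

    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(79793522 * time.Nanosecond) // the delta reported in the log
        delta, ok := withinTolerance(guest, host, time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
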
	I1028 11:13:36.470874  150723 start.go:83] releasing machines lock for "ha-928358-m03", held for 24.160222671s
	I1028 11:13:36.470894  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.471174  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:36.473802  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.474314  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.474345  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.476703  150723 out.go:177] * Found network options:
	I1028 11:13:36.478253  150723 out.go:177]   - NO_PROXY=192.168.39.206,192.168.39.15
	W1028 11:13:36.479492  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:13:36.479516  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:13:36.479532  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480171  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480372  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480474  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:13:36.480516  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	W1028 11:13:36.480627  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:13:36.480648  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:13:36.480710  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:13:36.480733  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.483390  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483597  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483802  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.483836  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483976  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.484137  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.484152  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.484171  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.484240  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.484323  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.484392  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.484441  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.484542  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.484643  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.722609  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:13:36.728895  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:13:36.728959  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:13:36.746783  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:13:36.746814  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:13:36.746889  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:13:36.764176  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:13:36.780539  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:13:36.780611  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:13:36.795323  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:13:36.811733  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:13:36.943649  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:13:37.116480  150723 docker.go:233] disabling docker service ...
	I1028 11:13:37.116541  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:13:37.131848  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:13:37.146207  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:13:37.271760  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:13:37.397315  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:13:37.413150  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:13:37.433193  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:13:37.433274  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.448784  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:13:37.448861  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.461820  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.474878  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.487273  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:13:37.500384  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.513109  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.533296  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
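
The sed commands above rewrite CRI-O's drop-in config to pin the pause image and switch the cgroup manager to cgroupfs. A rough Go equivalent of the first two edits, assuming the same file path and values shown in the log (error handling kept minimal):

    // crioconf.go: rough equivalent of the sed edits above on CRI-O's drop-in config.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // pin the pause image, as "sed -i 's|^.*pause_image = ...|'" does above
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        // force the cgroupfs cgroup manager
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
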
	I1028 11:13:37.546472  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:13:37.557495  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:13:37.557598  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:13:37.573136  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
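
Above, the bridge-netfilter sysctl is missing, so the br_netfilter module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. A small Go sketch of that preparation (must run as root; minikube's exact behaviour may differ in details):

    // netprep.go: sketch of the netfilter preparation shown above.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // if the bridge sysctl is absent, load br_netfilter as the log does
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                panic(err)
            }
        }
        // echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            panic(err)
        }
    }
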
	I1028 11:13:37.584661  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:13:37.701023  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:13:37.798120  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:13:37.798207  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:13:37.803954  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:13:37.804021  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:13:37.808938  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:13:37.851814  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:13:37.851905  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:13:37.881347  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:13:37.916129  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:13:37.917503  150723 out.go:177]   - env NO_PROXY=192.168.39.206
	I1028 11:13:37.918841  150723 out.go:177]   - env NO_PROXY=192.168.39.206,192.168.39.15
	I1028 11:13:37.920060  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:37.923080  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:37.923530  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:37.923560  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:37.923801  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:13:37.928489  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
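
The one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway mapping; the same pattern is used later for control-plane.minikube.internal. A sketch of that rewrite in Go, with the hostname and IP taken from the log:

    // hostspin.go: sketch of the /etc/hosts rewrite shown above.
    package main

    import (
        "os"
        "strings"
    )

    func pinHost(path, name, ip string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // drop any existing "<ip>\t<name>" entry, mirroring grep -v $'\t<name>$'
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "host.minikube.internal", "192.168.39.1"); err != nil {
            panic(err)
        }
    }
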
	I1028 11:13:37.944276  150723 mustload.go:65] Loading cluster: ha-928358
	I1028 11:13:37.944540  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:37.944876  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:37.944917  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:37.960868  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I1028 11:13:37.961448  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:37.961978  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:37.962000  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:37.962320  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:37.962554  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:13:37.964176  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:13:37.964500  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:37.964546  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:37.980099  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I1028 11:13:37.980536  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:37.980994  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:37.981027  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:37.981316  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:37.981476  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:13:37.981636  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.44
	I1028 11:13:37.981649  150723 certs.go:194] generating shared ca certs ...
	I1028 11:13:37.981667  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:37.981815  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:13:37.981867  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:13:37.981880  150723 certs.go:256] generating profile certs ...
	I1028 11:13:37.981981  150723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:13:37.982024  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408
	I1028 11:13:37.982045  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.44 192.168.39.254]
	I1028 11:13:38.031818  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 ...
	I1028 11:13:38.031849  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408: {Name:mk24630c498d89b32162095507c0812c854412bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:38.032046  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408 ...
	I1028 11:13:38.032062  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408: {Name:mk38f2fd390923bb1dfc386b88fc31f22cbd1405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:38.032164  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:13:38.032326  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:13:38.032501  150723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:13:38.032524  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:13:38.032548  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:13:38.032568  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:13:38.032585  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:13:38.032605  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:13:38.032622  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:13:38.032641  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:13:38.045605  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:13:38.045699  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:13:38.045758  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:13:38.045774  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:13:38.045809  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:13:38.045836  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:13:38.045857  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:13:38.045912  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:13:38.045950  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.045974  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.045992  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.046044  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:13:38.049011  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:38.049464  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:13:38.049485  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:38.049679  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:13:38.049889  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:13:38.050031  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:13:38.050163  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:13:38.129875  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:13:38.135272  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:13:38.146812  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:13:38.151195  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:13:38.162579  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:13:38.167018  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:13:38.178835  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:13:38.183162  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:13:38.195172  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:13:38.199929  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:13:38.212017  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:13:38.216559  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:13:38.228337  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:13:38.256831  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:13:38.282349  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:13:38.312381  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:13:38.340368  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:13:38.368852  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:13:38.396585  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:13:38.425195  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:13:38.453101  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:13:38.479115  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:13:38.505463  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:13:38.531445  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:13:38.550676  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:13:38.570134  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:13:38.588413  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:13:38.606756  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:13:38.626726  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:13:38.646275  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:13:38.665976  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:13:38.672176  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:13:38.685017  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.690136  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.690209  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.697711  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:13:38.712239  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:13:38.725832  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.730869  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.730941  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.737271  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:13:38.751047  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:13:38.763980  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.769518  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.769615  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.776609  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
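
Each CA above is installed by computing its OpenSSL subject hash and symlinking it as /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA.pem in this run). A sketch of that step in Go, shelling out to openssl just as the log does:

    // certlink.go: sketch of the hash-and-symlink step shown above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const cert = "/usr/share/ca-certificates/minikubeCA.pem"
        // openssl x509 -hash -noout -in <cert> prints the subject hash
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                panic(err)
            }
        }
    }
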
	I1028 11:13:38.791196  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:13:38.796201  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:13:38.796261  150723 kubeadm.go:934] updating node {m03 192.168.39.44 8443 v1.31.2 crio true true} ...
	I1028 11:13:38.796362  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:13:38.796397  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:13:38.796470  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:13:38.817160  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:13:38.817224  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
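
The generated kube-vip manifest above is installed as a static pod: later in the log it is copied to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes), and the kubelet picks up anything in that directory. A minimal sketch of that install step, assuming the rendered manifest is available locally as kube-vip.yaml:

    // staticpod.go: sketch of installing the kube-vip manifest above as a static pod.
    package main

    import (
        "os"
        "path/filepath"
    )

    func installStaticPod(manifest []byte) error {
        dir := "/etc/kubernetes/manifests" // the kubelet runs manifests from this directory
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), manifest, 0o644)
    }

    func main() {
        manifest, err := os.ReadFile("kube-vip.yaml") // rendered config as shown above
        if err != nil {
            panic(err)
        }
        if err := installStaticPod(manifest); err != nil {
            panic(err)
        }
    }
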
	I1028 11:13:38.817279  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:13:38.829712  150723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:13:38.829765  150723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:13:38.842596  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:13:38.842645  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:13:38.842602  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:13:38.842708  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:13:38.842755  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:13:38.842602  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:13:38.842821  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:13:38.842886  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:13:38.849835  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:13:38.849867  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:13:38.850062  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:13:38.850096  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:13:38.869860  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:13:38.870019  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:13:39.008547  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:13:39.008597  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
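
Because the node has no cached k8s binaries, kubeadm/kubectl/kubelet are fetched from dl.k8s.io with the checksum taken from the corresponding .sha256 file (the "Not caching binary, using ...?checksum=file:..." lines above). A sketch of that download-and-verify flow in Go, using the kubelet URL from the log as an example:

    // fetchbin.go: sketch of downloading a release binary and verifying it against
    // the published .sha256 file, as the log does for kubeadm/kubectl/kubelet.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetchAndVerify(url, dest string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        // hash the bytes while writing them to disk
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        want, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
            return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
        }
        return nil
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
        if err := fetchAndVerify(url, "/var/lib/minikube/binaries/v1.31.2/kubelet"); err != nil {
            panic(err)
        }
    }
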
	I1028 11:13:39.841044  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:13:39.851424  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:13:39.870537  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:13:39.890208  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:13:39.908650  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:13:39.913130  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:13:39.926430  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:13:40.057322  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:13:40.076284  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:13:40.076669  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:40.076716  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:40.094065  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I1028 11:13:40.094505  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:40.095080  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:40.095109  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:40.095526  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:40.095722  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:13:40.095896  150723 start.go:317] joinCluster: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:13:40.096063  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:13:40.096090  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:13:40.099282  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:40.099834  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:13:40.099865  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:40.100013  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:13:40.100216  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:13:40.100410  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:13:40.100563  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:13:40.273359  150723 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:13:40.273397  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a413hq.qk9z79cdsin0pfn9 --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443"
	I1028 11:14:04.540358  150723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a413hq.qk9z79cdsin0pfn9 --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443": (24.266932187s)
	I1028 11:14:04.540403  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:14:05.110298  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358-m03 minikube.k8s.io/updated_at=2024_10_28T11_14_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=false
	I1028 11:14:05.258236  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-928358-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:14:05.400029  150723 start.go:319] duration metric: took 25.304126551s to joinCluster
	I1028 11:14:05.400118  150723 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:14:05.400571  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:14:05.401586  150723 out.go:177] * Verifying Kubernetes components...
	I1028 11:14:05.403593  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:14:05.647217  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:14:05.664862  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:14:05.665098  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:14:05.665166  150723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.206:8443
	I1028 11:14:05.665399  150723 node_ready.go:35] waiting up to 6m0s for node "ha-928358-m03" to be "Ready" ...
	I1028 11:14:05.665469  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:05.665476  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:05.665484  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:05.665490  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:05.669744  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:06.165968  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:06.165997  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:06.166009  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:06.166016  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:06.170123  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:06.666317  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:06.666416  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:06.666445  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:06.666462  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:06.670843  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:07.165728  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:07.165755  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:07.165768  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:07.165776  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:07.169304  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:07.666123  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:07.666154  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:07.666165  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:07.666171  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:07.669713  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:07.670892  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
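
The round_trippers lines above poll GET /api/v1/nodes/ha-928358-m03 roughly twice a second until the node reports Ready (up to the 6m0s budget). An illustrative Go sketch of the same readiness poll using client-go; minikube does this through its own kapi/node_ready helpers, so this is a sketch rather than the exact implementation:

    // nodeready.go: sketch of the Ready-condition poll shown in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19876-132631/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s" above
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-928358-m03", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
        }
        panic("timed out waiting for node to become Ready")
    }
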
	I1028 11:14:08.166009  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:08.166031  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:08.166039  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:08.166043  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:08.169692  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:08.666389  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:08.666423  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:08.666436  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:08.666446  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:08.671535  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:14:09.166494  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:09.166518  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:09.166530  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:09.166537  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:09.170858  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:09.665722  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:09.665745  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:09.665753  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:09.665762  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:09.670170  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:09.671084  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:10.165695  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:10.165724  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:10.165735  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:10.165742  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:10.173147  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:10.666401  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:10.666429  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:10.666440  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:10.666443  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:10.671830  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:14:11.165701  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:11.165722  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:11.165731  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:11.165737  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:11.228148  150723 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I1028 11:14:11.666333  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:11.666388  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:11.666401  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:11.666408  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:11.670186  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:11.671264  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:12.165684  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:12.165709  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:12.165715  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:12.165719  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:12.170052  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:12.666466  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:12.666494  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:12.666504  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:12.666509  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:12.670352  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:13.166382  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:13.166410  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:13.166421  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:13.166427  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:13.171235  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:13.666623  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:13.666647  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:13.666656  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:13.666661  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:13.670621  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:14.165740  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:14.165767  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:14.165776  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:14.165783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:14.169178  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:14.170214  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:14.666184  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:14.666206  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:14.666215  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:14.666219  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:14.670466  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:15.166232  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:15.166261  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:15.166272  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:15.166276  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:15.173444  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:15.666306  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:15.666335  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:15.666344  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:15.666348  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:15.670385  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:16.166429  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:16.166461  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:16.166474  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:16.166481  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:16.170181  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:16.170699  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:16.665698  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:16.665723  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:16.665730  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:16.665734  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:16.669776  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:17.165640  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:17.165664  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:17.165672  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:17.165676  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:17.169368  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:17.666177  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:17.666202  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:17.666210  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:17.666214  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:17.670134  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.165917  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:18.165940  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:18.165948  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:18.165952  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:18.169496  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.665925  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:18.665949  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:18.665971  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:18.665976  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:18.669433  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.670970  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:19.165694  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:19.165718  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:19.165728  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:19.165732  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:19.170437  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:19.666095  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:19.666123  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:19.666134  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:19.666141  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:19.668970  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:20.166291  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:20.166314  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:20.166322  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:20.166326  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:20.170016  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:20.665789  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:20.665815  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:20.665822  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:20.665827  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:20.669287  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:21.165826  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:21.165853  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:21.165862  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:21.165868  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:21.169651  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:21.170332  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:21.665771  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:21.665804  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:21.665816  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:21.665822  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:21.669841  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:22.166380  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:22.166406  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:22.166414  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:22.166420  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:22.169816  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:22.666341  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:22.666364  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:22.666372  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:22.666377  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:22.670923  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:23.165737  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:23.165762  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.165771  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.165776  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.169299  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.665765  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:23.665789  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.665797  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.665801  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.669697  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.670619  150723 node_ready.go:49] node "ha-928358-m03" has status "Ready":"True"
	I1028 11:14:23.670643  150723 node_ready.go:38] duration metric: took 18.005227415s for node "ha-928358-m03" to be "Ready" ...
	I1028 11:14:23.670662  150723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:14:23.670813  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:23.670845  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.670858  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.670875  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.677257  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:23.683895  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.683990  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gnm9r
	I1028 11:14:23.683999  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.684007  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.684011  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.688327  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:23.688931  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.688948  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.688956  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.688960  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.691787  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.692523  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.692543  150723 pod_ready.go:82] duration metric: took 8.61912ms for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.692554  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.692624  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xxxgw
	I1028 11:14:23.692632  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.692639  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.692645  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.695738  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.696515  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.696533  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.696542  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.696548  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.699472  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.700068  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.700097  150723 pod_ready.go:82] duration metric: took 7.535535ms for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.700107  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.700162  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358
	I1028 11:14:23.700171  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.700178  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.700184  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.702917  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.703534  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.703550  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.703559  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.703566  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.706103  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.706650  150723 pod_ready.go:93] pod "etcd-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.706674  150723 pod_ready.go:82] duration metric: took 6.560031ms for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.706686  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.706758  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m02
	I1028 11:14:23.706768  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.706778  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.706785  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.709373  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.710451  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:23.710472  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.710484  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.710490  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.713376  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.713980  150723 pod_ready.go:93] pod "etcd-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.714010  150723 pod_ready.go:82] duration metric: took 7.313443ms for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.714024  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.866359  150723 request.go:632] Waited for 152.224049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m03
	I1028 11:14:23.866476  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m03
	I1028 11:14:23.866492  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.866504  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.866516  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.871166  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.066273  150723 request.go:632] Waited for 194.358951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:24.066350  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:24.066361  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.066372  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.066378  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.070313  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.071003  150723 pod_ready.go:93] pod "etcd-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.071021  150723 pod_ready.go:82] duration metric: took 356.990267ms for pod "etcd-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.071039  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.266224  150723 request.go:632] Waited for 195.110039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:14:24.266285  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:14:24.266290  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.266298  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.266303  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.271102  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.466777  150723 request.go:632] Waited for 195.051662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:24.466835  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:24.466840  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.466848  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.466857  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.471602  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.472438  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.472458  150723 pod_ready.go:82] duration metric: took 401.411661ms for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.472468  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.666245  150723 request.go:632] Waited for 193.688569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:14:24.666314  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:14:24.666321  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.666332  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.666337  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.670192  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.866165  150723 request.go:632] Waited for 195.218003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:24.866225  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:24.866230  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.866237  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.866242  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.869696  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.870520  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.870539  150723 pod_ready.go:82] duration metric: took 398.065091ms for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.870549  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.066723  150723 request.go:632] Waited for 196.090526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m03
	I1028 11:14:25.066790  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m03
	I1028 11:14:25.066796  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.066812  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.066818  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.070840  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:25.266492  150723 request.go:632] Waited for 194.408437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:25.266550  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:25.266555  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.266563  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.266567  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.270440  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:25.271647  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:25.271668  150723 pod_ready.go:82] duration metric: took 401.112731ms for pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.271677  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.466686  150723 request.go:632] Waited for 194.942796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:14:25.466776  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:14:25.466782  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.466791  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.466799  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.478807  150723 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:14:25.666227  150723 request.go:632] Waited for 186.359371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:25.666322  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:25.666335  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.666346  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.666355  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.669950  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:25.670691  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:25.670710  150723 pod_ready.go:82] duration metric: took 399.026254ms for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.670723  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.866724  150723 request.go:632] Waited for 195.936368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:14:25.866801  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:14:25.866807  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.866814  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.866819  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.870640  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.065827  150723 request.go:632] Waited for 194.310294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:26.065907  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:26.065912  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.065920  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.065925  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.069699  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.070459  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.070478  150723 pod_ready.go:82] duration metric: took 399.749253ms for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.070489  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.266701  150723 request.go:632] Waited for 196.138179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m03
	I1028 11:14:26.266792  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m03
	I1028 11:14:26.266809  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.266825  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.266832  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.270679  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.466081  150723 request.go:632] Waited for 194.361983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:26.466174  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:26.466182  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.466194  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.466206  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.470252  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:26.470784  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.470804  150723 pod_ready.go:82] duration metric: took 400.309396ms for pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.470815  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.665844  150723 request.go:632] Waited for 194.95975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:14:26.665902  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:14:26.665925  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.665956  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.665963  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.669385  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.866618  150723 request.go:632] Waited for 196.393847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:26.866674  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:26.866679  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.866687  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.866690  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.870012  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.870701  150723 pod_ready.go:93] pod "kube-proxy-8fxdn" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.870720  150723 pod_ready.go:82] duration metric: took 399.898606ms for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.870734  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.065775  150723 request.go:632] Waited for 194.965869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:14:27.065845  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:14:27.065850  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.065858  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.065865  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.069945  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:27.266078  150723 request.go:632] Waited for 195.378208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:27.266154  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:27.266159  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.266167  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.266174  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.269961  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:27.270605  150723 pod_ready.go:93] pod "kube-proxy-cfhp5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:27.270625  150723 pod_ready.go:82] duration metric: took 399.882701ms for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.270640  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-np8x5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.466435  150723 request.go:632] Waited for 195.719587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-np8x5
	I1028 11:14:27.466503  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-np8x5
	I1028 11:14:27.466511  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.466550  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.466562  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.473780  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:27.666214  150723 request.go:632] Waited for 191.347069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:27.666284  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:27.666291  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.666298  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.666302  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.670820  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:27.671554  150723 pod_ready.go:93] pod "kube-proxy-np8x5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:27.671578  150723 pod_ready.go:82] duration metric: took 400.929643ms for pod "kube-proxy-np8x5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.671589  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.866741  150723 request.go:632] Waited for 195.08002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:14:27.866814  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:14:27.866821  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.866832  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.866843  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.870682  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.066337  150723 request.go:632] Waited for 194.812157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:28.066403  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:28.066408  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.066416  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.066420  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.069743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.070462  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.070483  150723 pod_ready.go:82] duration metric: took 398.887712ms for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.070497  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.265961  150723 request.go:632] Waited for 195.392733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:14:28.266039  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:14:28.266047  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.266057  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.266088  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.269740  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.465851  150723 request.go:632] Waited for 195.318291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:28.465931  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:28.465937  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.465949  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.465957  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.470812  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:28.471696  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.471720  150723 pod_ready.go:82] duration metric: took 401.210524ms for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.471733  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.665763  150723 request.go:632] Waited for 193.940561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m03
	I1028 11:14:28.665854  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m03
	I1028 11:14:28.665869  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.665877  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.665883  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.669746  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.866768  150723 request.go:632] Waited for 196.382736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:28.866827  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:28.866832  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.866840  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.866844  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.870665  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.871107  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.871125  150723 pod_ready.go:82] duration metric: took 399.382061ms for pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.871136  150723 pod_ready.go:39] duration metric: took 5.200463354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:14:28.871154  150723 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:14:28.871205  150723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:14:28.894991  150723 api_server.go:72] duration metric: took 23.494825881s to wait for apiserver process to appear ...
	I1028 11:14:28.895029  150723 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:14:28.895053  150723 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1028 11:14:28.901769  150723 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1028 11:14:28.901850  150723 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1028 11:14:28.901857  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.901868  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.901879  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.903049  150723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:14:28.903133  150723 api_server.go:141] control plane version: v1.31.2
	I1028 11:14:28.903153  150723 api_server.go:131] duration metric: took 8.11544ms to wait for apiserver health ...
	I1028 11:14:28.903164  150723 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:14:29.066557  150723 request.go:632] Waited for 163.310035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.066623  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.066628  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.066650  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.066657  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.073405  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:29.079996  150723 system_pods.go:59] 24 kube-system pods found
	I1028 11:14:29.080029  150723 system_pods.go:61] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:14:29.080039  150723 system_pods.go:61] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:14:29.080043  150723 system_pods.go:61] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:14:29.080047  150723 system_pods.go:61] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:14:29.080050  150723 system_pods.go:61] "etcd-ha-928358-m03" [56e4453a-65fd-4b3f-9556-e5cec7aa0400] Running
	I1028 11:14:29.080053  150723 system_pods.go:61] "kindnet-9k2mz" [946ea25c-8bc6-46d5-9804-7d8f75ba2ad4] Running
	I1028 11:14:29.080056  150723 system_pods.go:61] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:14:29.080062  150723 system_pods.go:61] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:14:29.080065  150723 system_pods.go:61] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:14:29.080068  150723 system_pods.go:61] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:14:29.080071  150723 system_pods.go:61] "kube-apiserver-ha-928358-m03" [b5e63feb-e15c-42f4-8e49-9775a7602add] Running
	I1028 11:14:29.080075  150723 system_pods.go:61] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:14:29.080079  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:14:29.080085  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m03" [ad543df1-fd1e-4fbe-b70b-06af7d39f971] Running
	I1028 11:14:29.080089  150723 system_pods.go:61] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:14:29.080094  150723 system_pods.go:61] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:14:29.080099  150723 system_pods.go:61] "kube-proxy-np8x5" [c8dd1d78-2375-49d4-b476-ec52dd65830b] Running
	I1028 11:14:29.080103  150723 system_pods.go:61] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:14:29.080109  150723 system_pods.go:61] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:14:29.080117  150723 system_pods.go:61] "kube-scheduler-ha-928358-m03" [b9809d8d-8a45-4363-9b03-55995deb6b62] Running
	I1028 11:14:29.080124  150723 system_pods.go:61] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:14:29.080135  150723 system_pods.go:61] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:14:29.080139  150723 system_pods.go:61] "kube-vip-ha-928358-m03" [894e8b21-2ffc-4ad5-89b1-80c915aecfb9] Running
	I1028 11:14:29.080142  150723 system_pods.go:61] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:14:29.080148  150723 system_pods.go:74] duration metric: took 176.977613ms to wait for pod list to return data ...
	I1028 11:14:29.080159  150723 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:14:29.266599  150723 request.go:632] Waited for 186.363794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:14:29.266653  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:14:29.266658  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.266665  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.266669  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.271060  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:29.271213  150723 default_sa.go:45] found service account: "default"
	I1028 11:14:29.271235  150723 default_sa.go:55] duration metric: took 191.069027ms for default service account to be created ...
	I1028 11:14:29.271247  150723 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:14:29.466315  150723 request.go:632] Waited for 194.981882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.466408  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.466421  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.466436  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.466448  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.472918  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:29.481266  150723 system_pods.go:86] 24 kube-system pods found
	I1028 11:14:29.481302  150723 system_pods.go:89] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:14:29.481308  150723 system_pods.go:89] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:14:29.481312  150723 system_pods.go:89] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:14:29.481316  150723 system_pods.go:89] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:14:29.481320  150723 system_pods.go:89] "etcd-ha-928358-m03" [56e4453a-65fd-4b3f-9556-e5cec7aa0400] Running
	I1028 11:14:29.481324  150723 system_pods.go:89] "kindnet-9k2mz" [946ea25c-8bc6-46d5-9804-7d8f75ba2ad4] Running
	I1028 11:14:29.481327  150723 system_pods.go:89] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:14:29.481330  150723 system_pods.go:89] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:14:29.481333  150723 system_pods.go:89] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:14:29.481336  150723 system_pods.go:89] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:14:29.481339  150723 system_pods.go:89] "kube-apiserver-ha-928358-m03" [b5e63feb-e15c-42f4-8e49-9775a7602add] Running
	I1028 11:14:29.481343  150723 system_pods.go:89] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:14:29.481346  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:14:29.481350  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m03" [ad543df1-fd1e-4fbe-b70b-06af7d39f971] Running
	I1028 11:14:29.481354  150723 system_pods.go:89] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:14:29.481359  150723 system_pods.go:89] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:14:29.481362  150723 system_pods.go:89] "kube-proxy-np8x5" [c8dd1d78-2375-49d4-b476-ec52dd65830b] Running
	I1028 11:14:29.481364  150723 system_pods.go:89] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:14:29.481368  150723 system_pods.go:89] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:14:29.481372  150723 system_pods.go:89] "kube-scheduler-ha-928358-m03" [b9809d8d-8a45-4363-9b03-55995deb6b62] Running
	I1028 11:14:29.481378  150723 system_pods.go:89] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:14:29.481382  150723 system_pods.go:89] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:14:29.481388  150723 system_pods.go:89] "kube-vip-ha-928358-m03" [894e8b21-2ffc-4ad5-89b1-80c915aecfb9] Running
	I1028 11:14:29.481392  150723 system_pods.go:89] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:14:29.481402  150723 system_pods.go:126] duration metric: took 210.146699ms to wait for k8s-apps to be running ...
	I1028 11:14:29.481415  150723 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:14:29.481478  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:14:29.499294  150723 system_svc.go:56] duration metric: took 17.867458ms WaitForService to wait for kubelet
	I1028 11:14:29.499345  150723 kubeadm.go:582] duration metric: took 24.099188581s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:14:29.499369  150723 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:14:29.666183  150723 request.go:632] Waited for 166.698659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1028 11:14:29.666244  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1028 11:14:29.666250  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.666258  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.666262  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.670701  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:29.671840  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671859  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671869  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671873  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671877  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671880  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671883  150723 node_conditions.go:105] duration metric: took 172.509467ms to run NodePressure ...
	I1028 11:14:29.671895  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:14:29.671914  150723 start.go:255] writing updated cluster config ...
	I1028 11:14:29.672186  150723 ssh_runner.go:195] Run: rm -f paused
	I1028 11:14:29.727881  150723 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:14:29.729936  150723 out.go:177] * Done! kubectl is now configured to use "ha-928358" cluster and "default" namespace by default
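	For reference, the readiness checks logged above can be approximated by hand against the same cluster. This is a minimal sketch, assuming the "ha-928358" profile/context from the log is still present; the commands and flags below are illustrative equivalents, not the exact invocations minikube performs internally:
	
	    # probe the kubelet unit that the ssh_runner line above checks
	    minikube ssh -p ha-928358 -- sudo systemctl is-active kubelet
	    # list the kube-system pods enumerated in the system_pods.go output
	    kubectl --context ha-928358 get pods -n kube-system
	    # node status and capacity (ephemeral storage, CPU) inspected by the NodePressure step
	    kubectl --context ha-928358 get nodes -o wide
	    kubectl --context ha-928358 describe nodes | grep -A5 Capacity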
	
	
	==> CRI-O <==
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.240163733Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d91d2c70-c24f-4070-8962-a9cabf1c52f9 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.245359620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6160d5de-9376-4516-bb60-f96451641788 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.245853307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114305245816424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6160d5de-9376-4516-bb60-f96451641788 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.247808233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae00d99b-06fa-4621-93e8-e7d70e841748 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.247871288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae00d99b-06fa-4621-93e8-e7d70e841748 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.248190727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae00d99b-06fa-4621-93e8-e7d70e841748 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.296145391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20c6d79b-658f-47ec-bd81-496908e321fa name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.296224242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20c6d79b-658f-47ec-bd81-496908e321fa name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.297536723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e0824b9-49b0-413a-bfdb-88d69a46f2d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.297964266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114305297943876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e0824b9-49b0-413a-bfdb-88d69a46f2d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.298711878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a45574a5-6774-4c1d-bbf5-f4e48c5f91d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.298767635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a45574a5-6774-4c1d-bbf5-f4e48c5f91d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.299094567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a45574a5-6774-4c1d-bbf5-f4e48c5f91d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.331323770Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d59c42ea-b76f-4879-86fa-4ed996b30788 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.331810574Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-dnw8z,Uid:9c810197-a557-46ef-b357-7e291a4a7b89,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730114071346550352,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:14:30.733782207Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:84b302cf-9f88-4a96-aa61-c2ca6512e060,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1730113923125936665,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-28T11:12:02.807323181Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xxxgw,Uid:6a07f06b-45fb-48df-a2a2-11a778f673f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113923125059570,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a07f06b-45fb-48df-a2a2-11a778f673f9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:12:02.805561191Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gnm9r,Uid:a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1730113923103629359,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:12:02.797315413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&PodSandboxMetadata{Name:kindnet-pq9gp,Uid:2ea8de0e-a664-4adb-aec2-6f98508540c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113910415147879,Labels:map[string]string{app: kindnet,controller-revision-hash: 6f5b6b96c8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:11:50.106439100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&PodSandboxMetadata{Name:kube-proxy-8fxdn,Uid:7b2e1e84-6129-4868-b46b-525da3cdf687,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113910405770392,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T11:11:50.090649853Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&PodSandboxMetadata{Name:etcd-ha-928358,Uid:6c6aafad1b68cb8667c9a27dc935b2f4,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1730113898901764232,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.206:2379,kubernetes.io/config.hash: 6c6aafad1b68cb8667c9a27dc935b2f4,kubernetes.io/config.seen: 2024-10-28T11:11:38.384829455Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-928358,Uid:5ad239d10939bdcd9fa6b3f4d3a18685,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113898894235076,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d109
39bdcd9fa6b3f4d3a18685,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.206:8443,kubernetes.io/config.hash: 5ad239d10939bdcd9fa6b3f4d3a18685,kubernetes.io/config.seen: 2024-10-28T11:11:38.384833611Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-928358,Uid:65f0454183202822eaaf9dce289e7ab0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113898890390090,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{kubernetes.io/config.hash: 65f0454183202822eaaf9dce289e7ab0,kubernetes.io/config.seen: 2024-10-28T11:11:38.384910523Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2efa4330e0881e7fbc78
ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-928358,Uid:bf3ddb9faad874d83f5a9c68c563fb6b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113898884624977,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bf3ddb9faad874d83f5a9c68c563fb6b,kubernetes.io/config.seen: 2024-10-28T11:11:38.384907467Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-928358,Uid:66d5e9725d6fffac64bd660c7f6042f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1730113898864719968,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 66d5e9725d6fffac64bd660c7f6042f6,kubernetes.io/config.seen: 2024-10-28T11:11:38.384909743Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d59c42ea-b76f-4879-86fa-4ed996b30788 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.333134888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f77e614-8972-48a2-bfca-476bf1513bd7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.333255127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f77e614-8972-48a2-bfca-476bf1513bd7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.333612928Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f77e614-8972-48a2-bfca-476bf1513bd7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.346928428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d8ad03e-7fff-436e-9d18-223b7cf53c3f name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.347070697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d8ad03e-7fff-436e-9d18-223b7cf53c3f name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.349204350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5202df3-8ccd-44d1-bce2-a01a3270f933 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.350444600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114305350416872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5202df3-8ccd-44d1-bce2-a01a3270f933 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.351110106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51fbc8e9-9ad1-45c2-9a74-2dabca9ee1d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.351185350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51fbc8e9-9ad1-45c2-9a74-2dabca9ee1d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:25 ha-928358 crio[664]: time="2024-10-28 11:18:25.351441821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51fbc8e9-9ad1-45c2-9a74-2dabca9ee1d6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	678eb45e28d22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   6fcf4a6026d95       busybox-7dff88458-dnw8z
	267b822906895       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   554c79cdc22b7       coredns-7c65d6cfc9-gnm9r
	0ec81022134ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b55f959c9e26e       coredns-7c65d6cfc9-xxxgw
	101876df5ba49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   cc9b8c6075292       storage-provisioner
	93fda9ea564e1       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   af0a9858b9f50       kindnet-pq9gp
	6af78d85866c9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   f07333184a007       kube-proxy-8fxdn
	b4500f47684e6       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   aef8ad820f733       kube-vip-ha-928358
	a75ab3d16aba2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   841e8a03bb9b3       etcd-ha-928358
	f8221151573cf       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   1975c249cdfee       kube-apiserver-ha-928358
	e735b7e201a7d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   2efa4330e0881       kube-controller-manager-ha-928358
	1be8f3556358e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   041b17e002580       kube-scheduler-ha-928358
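The table above is the CRI-level view of the primary control-plane node: every pod of the HA cluster (two coredns replicas, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kube-vip, kindnet, storage-provisioner, plus the busybox test pod) is Running with zero restarts. A minimal sketch of how such a listing can be reproduced by hand, assuming ha-928358 is the minikube profile name (as the node names suggest) and the cluster is still up:

  minikube ssh -p ha-928358        # open a shell in the primary node's VM
  sudo crictl ps -a                # inside the guest: list all CRI-O containers
  sudo crictl logs <container-id>  # dump the log of one container by its ID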
	
	
	==> coredns [0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962] <==
	[INFO] 10.244.2.2:54221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001644473s
	[INFO] 10.244.2.2:58493 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00055293s
	[INFO] 10.244.1.2:59466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000373197s
	[INFO] 10.244.1.2:59196 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002135371s
	[INFO] 10.244.0.4:48789 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140504s
	[INFO] 10.244.0.4:43613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168237s
	[INFO] 10.244.0.4:38143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.016935286s
	[INFO] 10.244.0.4:39110 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177298s
	[INFO] 10.244.2.2:46780 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169863s
	[INFO] 10.244.2.2:56782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002009621s
	[INFO] 10.244.2.2:39525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138628s
	[INFO] 10.244.2.2:53832 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216458s
	[INFO] 10.244.1.2:39727 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000226061s
	[INFO] 10.244.1.2:60944 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001495416s
	[INFO] 10.244.1.2:36506 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119701s
	[INFO] 10.244.1.2:59657 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001674s
	[INFO] 10.244.0.4:50368 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178977s
	[INFO] 10.244.0.4:47562 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089999s
	[INFO] 10.244.1.2:44983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013645s
	[INFO] 10.244.1.2:33581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164661s
	[INFO] 10.244.1.2:39245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099456s
	[INFO] 10.244.0.4:48286 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018935s
	[INFO] 10.244.0.4:33651 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000163132s
	[INFO] 10.244.2.2:57361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144876s
	[INFO] 10.244.2.2:38124 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021886s
	
	
	==> coredns [267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134] <==
	[INFO] 10.244.0.4:46197 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168175s
	[INFO] 10.244.0.4:43404 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138086s
	[INFO] 10.244.2.2:42078 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211245s
	[INFO] 10.244.2.2:43818 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001478975s
	[INFO] 10.244.2.2:36869 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148567s
	[INFO] 10.244.2.2:38696 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110904s
	[INFO] 10.244.1.2:53013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000625096s
	[INFO] 10.244.1.2:57247 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002184098s
	[INFO] 10.244.1.2:60298 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097712s
	[INFO] 10.244.1.2:42104 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099517s
	[INFO] 10.244.0.4:43344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166235s
	[INFO] 10.244.0.4:39756 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110369s
	[INFO] 10.244.2.2:51568 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132969s
	[INFO] 10.244.2.2:39038 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106245s
	[INFO] 10.244.2.2:36223 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090887s
	[INFO] 10.244.2.2:53817 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077711s
	[INFO] 10.244.1.2:45611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112879s
	[INFO] 10.244.0.4:48292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126001s
	[INFO] 10.244.0.4:49134 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000314244s
	[INFO] 10.244.2.2:38137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166744s
	[INFO] 10.244.2.2:49391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000218881s
	[INFO] 10.244.1.2:58619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152475s
	[INFO] 10.244.1.2:59879 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000283359s
	[INFO] 10.244.1.2:33696 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103786s
	[INFO] 10.244.1.2:41150 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120227s
	
	
	==> describe nodes <==
	Name:               ha-928358
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_11_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:11:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:12:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-928358
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3063a9eb16b941929fe95ea9deb85942
	  System UUID:                3063a9eb-16b9-4192-9fe9-5ea9deb85942
	  Boot ID:                    4750ce27-a752-459c-82e1-f46d3ba9e4fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dnw8z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 coredns-7c65d6cfc9-gnm9r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m35s
	  kube-system                 coredns-7c65d6cfc9-xxxgw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m35s
	  kube-system                 etcd-ha-928358                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m40s
	  kube-system                 kindnet-pq9gp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-apiserver-ha-928358             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-controller-manager-ha-928358    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-proxy-8fxdn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-scheduler-ha-928358             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-vip-ha-928358                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m34s  kube-proxy       
	  Normal  Starting                 6m40s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m40s  kubelet          Node ha-928358 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s  kubelet          Node ha-928358 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s  kubelet          Node ha-928358 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m36s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	  Normal  NodeReady                6m23s  kubelet          Node ha-928358 status is now: NodeReady
	  Normal  RegisteredNode           5m30s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	  Normal  RegisteredNode           4m15s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	
	
	Name:               ha-928358-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_12_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:12:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:15:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-928358-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb0972414207466c8358559557f25b09
	  System UUID:                fb097241-4207-466c-8358-559557f25b09
	  Boot ID:                    69b9f603-4134-42b4-a3f9-eeae845c3c91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tx5tk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-928358-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m37s
	  kube-system                 kindnet-j4vj5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m39s
	  kube-system                 kube-apiserver-ha-928358-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-controller-manager-ha-928358-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-proxy-cfhp5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-ha-928358-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-vip-ha-928358-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m34s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m39s (x8 over 5m39s)  kubelet          Node ha-928358-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m39s (x8 over 5m39s)  kubelet          Node ha-928358-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m39s (x7 over 5m39s)  kubelet          Node ha-928358-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  NodeNotReady             111s                   node-controller  Node ha-928358-m02 status is now: NodeNotReady
	
	
	Name:               ha-928358-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_14_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:14:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-928358-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebf69c3934784b66bc2bf05f458d71ba
	  System UUID:                ebf69c39-3478-4b66-bc2b-f05f458d71ba
	  Boot ID:                    2e5043ad-620d-4233-b866-677c45434de6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h8ctp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-928358-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m23s
	  kube-system                 kindnet-9k2mz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m25s
	  kube-system                 kube-apiserver-ha-928358-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-ha-928358-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-np8x5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-ha-928358-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-vip-ha-928358-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet          Node ha-928358-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet          Node ha-928358-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x7 over 4m25s)  kubelet          Node ha-928358-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	
	
	Name:               ha-928358-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_15_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:15:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-928358-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ee6c88b1c8c4fa2aebbfe4047465ead
	  System UUID:                6ee6c88b-1c8c-4fa2-aebb-fe4047465ead
	  Boot ID:                    b70ab214-29c9-4d90-9700-0ff1df9971f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-k2ddr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m16s
	  kube-system                 kube-proxy-fl4b7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m16s (x2 over 3m16s)  kubelet          Node ha-928358-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m16s (x2 over 3m16s)  kubelet          Node ha-928358-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m16s (x2 over 3m16s)  kubelet          Node ha-928358-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-928358-m04 status is now: NodeReady
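Taken together, the four node descriptions show the state being diagnosed: ha-928358, ha-928358-m03 and ha-928358-m04 are Ready, while ha-928358-m02 carries the node.kubernetes.io/unreachable taints and all of its conditions are Unknown because its kubelet stopped posting status, consistent with the secondary control-plane node having been stopped. A hedged sketch of how the same picture could be confirmed from the host, assuming the kubeconfig context is named after the profile (ha-928358):

  kubectl --context ha-928358 get nodes -o wide
  kubectl --context ha-928358 describe node ha-928358-m02 | grep -A2 Taints
  # pods scheduled on the unreachable node (standard field-selector syntax)
  kubectl --context ha-928358 get pods -A -o wide --field-selector spec.nodeName=ha-928358-m02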
	
	
	==> dmesg <==
	[Oct28 11:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053627] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041855] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.945749] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.924544] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.657378] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.658005] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.063082] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059947] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.199848] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.133132] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.303491] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.303698] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.055659] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.938074] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +1.148998] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.072047] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087002] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.352589] kauditd_printk_skb: 21 callbacks suppressed
	[Oct28 11:12] kauditd_printk_skb: 38 callbacks suppressed
	[ +49.929447] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854] <==
	{"level":"warn","ts":"2024-10-28T11:18:25.652541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.657109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.670331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.677699Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.684887Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.689288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.692720Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.698307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.699934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.705210Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.711674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.717947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.722087Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.730554Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.734404Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.734632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.740332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.747877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.752144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.755958Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.763698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.776439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.785383Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.786203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:25.800219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:18:25 up 7 min,  0 users,  load average: 0.54, 0.52, 0.28
	Linux ha-928358 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a] <==
	I1028 11:17:52.310848       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:02.315389       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:02.315498       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:18:02.315666       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:02.315707       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:18:02.315812       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:02.315836       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:02.315914       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:02.315935       1 main.go:300] handling current node
	I1028 11:18:12.318153       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:12.318184       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:12.318402       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:12.318430       1 main.go:300] handling current node
	I1028 11:18:12.318441       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:12.318446       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:18:12.318605       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:12.318645       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:18:22.308838       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:22.308947       1 main.go:300] handling current node
	I1028 11:18:22.308976       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:22.309061       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:18:22.309300       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:22.309333       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:18:22.309462       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:22.309499       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52] <==
	I1028 11:11:44.249575       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1028 11:11:44.264324       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1028 11:11:44.266721       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 11:11:44.273696       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 11:11:44.441833       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 11:11:45.375393       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:11:45.401215       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:11:45.422922       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:11:50.040543       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:11:50.160325       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:14:35.737044       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49680: use of closed network connection
	E1028 11:14:35.939412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49710: use of closed network connection
	E1028 11:14:36.137760       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49736: use of closed network connection
	E1028 11:14:36.353242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49742: use of closed network connection
	E1028 11:14:36.573304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49764: use of closed network connection
	E1028 11:14:36.795811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49780: use of closed network connection
	E1028 11:14:36.981176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49798: use of closed network connection
	E1028 11:14:37.177919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49830: use of closed network connection
	E1028 11:14:37.363976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49844: use of closed network connection
	E1028 11:14:37.667823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49884: use of closed network connection
	E1028 11:14:37.860879       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49906: use of closed network connection
	E1028 11:14:38.044254       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49922: use of closed network connection
	E1028 11:14:38.230562       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49930: use of closed network connection
	E1028 11:14:38.433175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49954: use of closed network connection
	E1028 11:14:38.620514       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49974: use of closed network connection
	
	
	==> kube-controller-manager [e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef] <==
	I1028 11:15:02.129745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m03"
	E1028 11:15:09.422518       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8k978 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8k978\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1028 11:15:09.795491       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-928358-m04\" does not exist"
	I1028 11:15:09.833650       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-928358-m04" podCIDRs=["10.244.3.0/24"]
	I1028 11:15:09.833720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:09.833754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.048409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.186481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.510390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:14.501689       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-928358-m04"
	I1028 11:15:14.502311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:14.708709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:20.001285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:31.204169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-928358-m04"
	I1028 11:15:31.204768       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:31.224821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:34.519983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:40.626763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:16:34.553439       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-928358-m04"
	I1028 11:16:34.556249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:34.585375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:34.698936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.004399ms"
	I1028 11:16:34.699212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.024µs"
	I1028 11:16:35.153194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:39.778629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	
	
	==> kube-proxy [6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:11:50.898284       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:11:50.922359       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E1028 11:11:50.922435       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:11:51.064127       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:11:51.064169       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:11:51.064206       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:11:51.084457       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:11:51.088588       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:11:51.088608       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:11:51.098854       1 config.go:199] "Starting service config controller"
	I1028 11:11:51.099108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:11:51.099342       1 config.go:328] "Starting node config controller"
	I1028 11:11:51.099355       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:11:51.122226       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:11:51.122243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:11:51.199431       1 shared_informer.go:320] Caches are synced for node config
	I1028 11:11:51.199505       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:11:51.222697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583] <==
	W1028 11:11:43.540244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.540296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.541960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:11:43.542068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.589795       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:11:43.589913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.666909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.667067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.681223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:11:43.681426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.721299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:11:43.721931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.811114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.811345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 11:11:46.351113       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:15:09.905243       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k2ddr\": pod kindnet-k2ddr is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k2ddr" node="ha-928358-m04"
	E1028 11:15:09.908212       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1733f64f-2a73-414c-a048-b4ad6b9bd117(kube-system/kindnet-k2ddr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k2ddr"
	E1028 11:15:09.910352       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k2ddr\": pod kindnet-k2ddr is already assigned to node \"ha-928358-m04\"" pod="kube-system/kindnet-k2ddr"
	I1028 11:15:09.910453       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k2ddr" node="ha-928358-m04"
	E1028 11:15:09.907070       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fl4b7\": pod kube-proxy-fl4b7 is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fl4b7" node="ha-928358-m04"
	E1028 11:15:09.910582       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 48c26642-8d42-43a1-ad06-ba9408499bf8(kube-system/kube-proxy-fl4b7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fl4b7"
	E1028 11:15:09.910623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fl4b7\": pod kube-proxy-fl4b7 is already assigned to node \"ha-928358-m04\"" pod="kube-system/kube-proxy-fl4b7"
	I1028 11:15:09.910661       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fl4b7" node="ha-928358-m04"
	E1028 11:15:09.930971       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tswkg\": pod kube-proxy-tswkg is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tswkg" node="ha-928358-m04"
	E1028 11:15:09.931171       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tswkg\": pod kube-proxy-tswkg is already assigned to node \"ha-928358-m04\"" pod="kube-system/kube-proxy-tswkg"
	
	
	==> kubelet <==
	Oct 28 11:16:55 ha-928358 kubelet[1312]: E1028 11:16:55.514793    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114215514414818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:16:55 ha-928358 kubelet[1312]: E1028 11:16:55.515166    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114215514414818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:05 ha-928358 kubelet[1312]: E1028 11:17:05.516628    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114225516360078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:05 ha-928358 kubelet[1312]: E1028 11:17:05.517193    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114225516360078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:15 ha-928358 kubelet[1312]: E1028 11:17:15.518657    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114235518443764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:15 ha-928358 kubelet[1312]: E1028 11:17:15.518678    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114235518443764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:25 ha-928358 kubelet[1312]: E1028 11:17:25.532318    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114245531090228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:25 ha-928358 kubelet[1312]: E1028 11:17:25.532805    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114245531090228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:35 ha-928358 kubelet[1312]: E1028 11:17:35.534490    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114255534180329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:35 ha-928358 kubelet[1312]: E1028 11:17:35.534569    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114255534180329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.349514    1312 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:17:45 ha-928358 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.536867    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114265536656122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.536910    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114265536656122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:55 ha-928358 kubelet[1312]: E1028 11:17:55.539160    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114275538681035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:55 ha-928358 kubelet[1312]: E1028 11:17:55.539208    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114275538681035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:05 ha-928358 kubelet[1312]: E1028 11:18:05.540899    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114285540540832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:05 ha-928358 kubelet[1312]: E1028 11:18:05.540940    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114285540540832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:15 ha-928358 kubelet[1312]: E1028 11:18:15.543044    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114295542712895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:15 ha-928358 kubelet[1312]: E1028 11:18:15.543124    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114295542712895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:25 ha-928358 kubelet[1312]: E1028 11:18:25.544764    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114305544540799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:25 ha-928358 kubelet[1312]: E1028 11:18:25.544789    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114305544540799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-928358 -n ha-928358
helpers_test.go:261: (dbg) Run:  kubectl --context ha-928358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.14s)
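To replay this scenario by hand outside the test harness, the failing step can be re-run against the same profile (a rough sketch only: the profile name and flags are copied from the audit table and post-mortem commands in this report, and the follow-up status check is an assumed sanity step, not part of the test itself):

    # Re-run the secondary control-plane restart that RestartSecondaryNode exercises.
    out/minikube-linux-amd64 -p ha-928358 node start m02 -v=7 --alsologtostderr
    # Then check whether the API server on the primary node reports Running again.
    out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-928358 -n ha-928358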

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.93265217s)
ha_test.go:309: expected profile "ha-928358" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-928358\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-928358\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-928358\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.206\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.15\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.44\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.203\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\"
:false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"
MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
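The assertion at ha_test.go:309 is driven entirely by the JSON that 'profile list' prints above. For local triage, the same check can be approximated from a shell (a hedged sketch: the jq filter and the expected "HAppy" value are inferred from the failure message, and are not part of the test harness itself):

    # Re-run the command the test runs, then pull out the status of the ha-928358 profile.
    out/minikube-linux-amd64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-928358") | .Status'
    # The test expects "HAppy" here; this run reported "Unknown" because m02 had not
    # rejoined after the preceding "node start m02" step.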
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-928358 -n ha-928358
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 logs -n 25: (1.542536401s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m03_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m04 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp testdata/cp-test.txt                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m04_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03:/home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m03 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-928358 node stop m02 -v=7                                                    | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-928358 node start m02 -v=7                                                   | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:10:59
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:10:59.463321  150723 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:10:59.463437  150723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:59.463447  150723 out.go:358] Setting ErrFile to fd 2...
	I1028 11:10:59.463453  150723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:59.463619  150723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:10:59.464198  150723 out.go:352] Setting JSON to false
	I1028 11:10:59.465062  150723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3202,"bootTime":1730110657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:10:59.465170  150723 start.go:139] virtualization: kvm guest
	I1028 11:10:59.467541  150723 out.go:177] * [ha-928358] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:10:59.469144  150723 notify.go:220] Checking for updates...
	I1028 11:10:59.469164  150723 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:10:59.470932  150723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:10:59.472579  150723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:10:59.474106  150723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:59.476022  150723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:10:59.477386  150723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:10:59.478873  150723 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:10:59.515106  150723 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 11:10:59.516643  150723 start.go:297] selected driver: kvm2
	I1028 11:10:59.516662  150723 start.go:901] validating driver "kvm2" against <nil>
	I1028 11:10:59.516677  150723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:10:59.517412  150723 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:10:59.517509  150723 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:10:59.533665  150723 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:10:59.533714  150723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 11:10:59.533960  150723 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:10:59.533991  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:10:59.534033  150723 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1028 11:10:59.534056  150723 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1028 11:10:59.534109  150723 start.go:340] cluster config:
	{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1028 11:10:59.534204  150723 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:10:59.536334  150723 out.go:177] * Starting "ha-928358" primary control-plane node in "ha-928358" cluster
	I1028 11:10:59.537748  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:10:59.537794  150723 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:10:59.537802  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:10:59.537881  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:10:59.537891  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:10:59.538184  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:10:59.538208  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json: {Name:mkb8dad6cb32a1c4cc26cae85e4e9234d9821c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:10:59.538374  150723 start.go:360] acquireMachinesLock for ha-928358: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:10:59.538406  150723 start.go:364] duration metric: took 16.963µs to acquireMachinesLock for "ha-928358"
	I1028 11:10:59.538425  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:10:59.538479  150723 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 11:10:59.540050  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:10:59.540188  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:10:59.540238  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:10:59.555032  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I1028 11:10:59.555455  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:10:59.555961  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:10:59.556000  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:10:59.556420  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:10:59.556590  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:10:59.556764  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:10:59.556945  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:10:59.556977  150723 client.go:168] LocalClient.Create starting
	I1028 11:10:59.557015  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:10:59.557068  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:10:59.557092  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:10:59.557167  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:10:59.557195  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:10:59.557226  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:10:59.557253  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:10:59.557273  150723 main.go:141] libmachine: (ha-928358) Calling .PreCreateCheck
	I1028 11:10:59.557662  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:10:59.558063  150723 main.go:141] libmachine: Creating machine...
	I1028 11:10:59.558080  150723 main.go:141] libmachine: (ha-928358) Calling .Create
	I1028 11:10:59.558226  150723 main.go:141] libmachine: (ha-928358) Creating KVM machine...
	I1028 11:10:59.559811  150723 main.go:141] libmachine: (ha-928358) DBG | found existing default KVM network
	I1028 11:10:59.560481  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.560340  150746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I1028 11:10:59.560504  150723 main.go:141] libmachine: (ha-928358) DBG | created network xml: 
	I1028 11:10:59.560515  150723 main.go:141] libmachine: (ha-928358) DBG | <network>
	I1028 11:10:59.560521  150723 main.go:141] libmachine: (ha-928358) DBG |   <name>mk-ha-928358</name>
	I1028 11:10:59.560530  150723 main.go:141] libmachine: (ha-928358) DBG |   <dns enable='no'/>
	I1028 11:10:59.560536  150723 main.go:141] libmachine: (ha-928358) DBG |   
	I1028 11:10:59.560547  150723 main.go:141] libmachine: (ha-928358) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1028 11:10:59.560555  150723 main.go:141] libmachine: (ha-928358) DBG |     <dhcp>
	I1028 11:10:59.560564  150723 main.go:141] libmachine: (ha-928358) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1028 11:10:59.560572  150723 main.go:141] libmachine: (ha-928358) DBG |     </dhcp>
	I1028 11:10:59.560581  150723 main.go:141] libmachine: (ha-928358) DBG |   </ip>
	I1028 11:10:59.560587  150723 main.go:141] libmachine: (ha-928358) DBG |   
	I1028 11:10:59.560595  150723 main.go:141] libmachine: (ha-928358) DBG | </network>
	I1028 11:10:59.560601  150723 main.go:141] libmachine: (ha-928358) DBG | 
	I1028 11:10:59.566260  150723 main.go:141] libmachine: (ha-928358) DBG | trying to create private KVM network mk-ha-928358 192.168.39.0/24...
	I1028 11:10:59.635650  150723 main.go:141] libmachine: (ha-928358) DBG | private KVM network mk-ha-928358 192.168.39.0/24 created
	I1028 11:10:59.635720  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.635608  150746 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:59.635745  150723 main.go:141] libmachine: (ha-928358) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 ...
	I1028 11:10:59.635835  150723 main.go:141] libmachine: (ha-928358) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:10:59.635904  150723 main.go:141] libmachine: (ha-928358) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:10:59.913193  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.913037  150746 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa...
	I1028 11:10:59.999912  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.999757  150746 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/ha-928358.rawdisk...
	I1028 11:10:59.999940  150723 main.go:141] libmachine: (ha-928358) DBG | Writing magic tar header
	I1028 11:10:59.999950  150723 main.go:141] libmachine: (ha-928358) DBG | Writing SSH key tar header
	I1028 11:10:59.999957  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:10:59.999874  150746 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 ...
	I1028 11:10:59.999966  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358
	I1028 11:11:00.000011  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358 (perms=drwx------)
	I1028 11:11:00.000025  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:11:00.000035  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:11:00.000055  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:11:00.000076  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:11:00.000090  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:00.000108  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:11:00.000117  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:11:00.000127  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:11:00.000138  150723 main.go:141] libmachine: (ha-928358) DBG | Checking permissions on dir: /home
	I1028 11:11:00.000147  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:11:00.000160  150723 main.go:141] libmachine: (ha-928358) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:11:00.000177  150723 main.go:141] libmachine: (ha-928358) DBG | Skipping /home - not owner
	I1028 11:11:00.000190  150723 main.go:141] libmachine: (ha-928358) Creating domain...
	I1028 11:11:00.001605  150723 main.go:141] libmachine: (ha-928358) define libvirt domain using xml: 
	I1028 11:11:00.001643  150723 main.go:141] libmachine: (ha-928358) <domain type='kvm'>
	I1028 11:11:00.001657  150723 main.go:141] libmachine: (ha-928358)   <name>ha-928358</name>
	I1028 11:11:00.001672  150723 main.go:141] libmachine: (ha-928358)   <memory unit='MiB'>2200</memory>
	I1028 11:11:00.001685  150723 main.go:141] libmachine: (ha-928358)   <vcpu>2</vcpu>
	I1028 11:11:00.001693  150723 main.go:141] libmachine: (ha-928358)   <features>
	I1028 11:11:00.001703  150723 main.go:141] libmachine: (ha-928358)     <acpi/>
	I1028 11:11:00.001711  150723 main.go:141] libmachine: (ha-928358)     <apic/>
	I1028 11:11:00.001724  150723 main.go:141] libmachine: (ha-928358)     <pae/>
	I1028 11:11:00.001748  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.001760  150723 main.go:141] libmachine: (ha-928358)   </features>
	I1028 11:11:00.001770  150723 main.go:141] libmachine: (ha-928358)   <cpu mode='host-passthrough'>
	I1028 11:11:00.001783  150723 main.go:141] libmachine: (ha-928358)   
	I1028 11:11:00.001795  150723 main.go:141] libmachine: (ha-928358)   </cpu>
	I1028 11:11:00.001806  150723 main.go:141] libmachine: (ha-928358)   <os>
	I1028 11:11:00.001820  150723 main.go:141] libmachine: (ha-928358)     <type>hvm</type>
	I1028 11:11:00.001839  150723 main.go:141] libmachine: (ha-928358)     <boot dev='cdrom'/>
	I1028 11:11:00.001851  150723 main.go:141] libmachine: (ha-928358)     <boot dev='hd'/>
	I1028 11:11:00.001863  150723 main.go:141] libmachine: (ha-928358)     <bootmenu enable='no'/>
	I1028 11:11:00.001872  150723 main.go:141] libmachine: (ha-928358)   </os>
	I1028 11:11:00.001884  150723 main.go:141] libmachine: (ha-928358)   <devices>
	I1028 11:11:00.001898  150723 main.go:141] libmachine: (ha-928358)     <disk type='file' device='cdrom'>
	I1028 11:11:00.001919  150723 main.go:141] libmachine: (ha-928358)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/boot2docker.iso'/>
	I1028 11:11:00.001933  150723 main.go:141] libmachine: (ha-928358)       <target dev='hdc' bus='scsi'/>
	I1028 11:11:00.001968  150723 main.go:141] libmachine: (ha-928358)       <readonly/>
	I1028 11:11:00.001991  150723 main.go:141] libmachine: (ha-928358)     </disk>
	I1028 11:11:00.002008  150723 main.go:141] libmachine: (ha-928358)     <disk type='file' device='disk'>
	I1028 11:11:00.002023  150723 main.go:141] libmachine: (ha-928358)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:11:00.002044  150723 main.go:141] libmachine: (ha-928358)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/ha-928358.rawdisk'/>
	I1028 11:11:00.002058  150723 main.go:141] libmachine: (ha-928358)       <target dev='hda' bus='virtio'/>
	I1028 11:11:00.002070  150723 main.go:141] libmachine: (ha-928358)     </disk>
	I1028 11:11:00.002106  150723 main.go:141] libmachine: (ha-928358)     <interface type='network'>
	I1028 11:11:00.002133  150723 main.go:141] libmachine: (ha-928358)       <source network='mk-ha-928358'/>
	I1028 11:11:00.002148  150723 main.go:141] libmachine: (ha-928358)       <model type='virtio'/>
	I1028 11:11:00.002159  150723 main.go:141] libmachine: (ha-928358)     </interface>
	I1028 11:11:00.002172  150723 main.go:141] libmachine: (ha-928358)     <interface type='network'>
	I1028 11:11:00.002179  150723 main.go:141] libmachine: (ha-928358)       <source network='default'/>
	I1028 11:11:00.002190  150723 main.go:141] libmachine: (ha-928358)       <model type='virtio'/>
	I1028 11:11:00.002197  150723 main.go:141] libmachine: (ha-928358)     </interface>
	I1028 11:11:00.002206  150723 main.go:141] libmachine: (ha-928358)     <serial type='pty'>
	I1028 11:11:00.002210  150723 main.go:141] libmachine: (ha-928358)       <target port='0'/>
	I1028 11:11:00.002216  150723 main.go:141] libmachine: (ha-928358)     </serial>
	I1028 11:11:00.002226  150723 main.go:141] libmachine: (ha-928358)     <console type='pty'>
	I1028 11:11:00.002250  150723 main.go:141] libmachine: (ha-928358)       <target type='serial' port='0'/>
	I1028 11:11:00.002282  150723 main.go:141] libmachine: (ha-928358)     </console>
	I1028 11:11:00.002291  150723 main.go:141] libmachine: (ha-928358)     <rng model='virtio'>
	I1028 11:11:00.002297  150723 main.go:141] libmachine: (ha-928358)       <backend model='random'>/dev/random</backend>
	I1028 11:11:00.002303  150723 main.go:141] libmachine: (ha-928358)     </rng>
	I1028 11:11:00.002306  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.002311  150723 main.go:141] libmachine: (ha-928358)     
	I1028 11:11:00.002318  150723 main.go:141] libmachine: (ha-928358)   </devices>
	I1028 11:11:00.002323  150723 main.go:141] libmachine: (ha-928358) </domain>
	I1028 11:11:00.002328  150723 main.go:141] libmachine: (ha-928358) 
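The domain XML printed above is likewise handed to libvirt. As a sketch, the equivalent manual flow (assuming the XML were saved as ha-928358.xml, a hypothetical filename) would be:

    virsh define ha-928358.xml     # register the domain without booting it
    virsh start ha-928358          # boot it (the "Creating domain..." step below)
    virsh domiflist ha-928358      # confirm the two virtio NICs and their MAC addresses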
	I1028 11:11:00.006810  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:30:04:d3 in network default
	I1028 11:11:00.007391  150723 main.go:141] libmachine: (ha-928358) Ensuring networks are active...
	I1028 11:11:00.007412  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:00.008229  150723 main.go:141] libmachine: (ha-928358) Ensuring network default is active
	I1028 11:11:00.008655  150723 main.go:141] libmachine: (ha-928358) Ensuring network mk-ha-928358 is active
	I1028 11:11:00.009320  150723 main.go:141] libmachine: (ha-928358) Getting domain xml...
	I1028 11:11:00.010062  150723 main.go:141] libmachine: (ha-928358) Creating domain...
	I1028 11:11:01.218137  150723 main.go:141] libmachine: (ha-928358) Waiting to get IP...
	I1028 11:11:01.218922  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.219337  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.219385  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.219330  150746 retry.go:31] will retry after 310.252899ms: waiting for machine to come up
	I1028 11:11:01.530950  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.531414  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.531437  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.531371  150746 retry.go:31] will retry after 282.464528ms: waiting for machine to come up
	I1028 11:11:01.815720  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:01.816159  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:01.816184  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:01.816121  150746 retry.go:31] will retry after 304.583775ms: waiting for machine to come up
	I1028 11:11:02.122718  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:02.123224  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:02.123251  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:02.123154  150746 retry.go:31] will retry after 442.531578ms: waiting for machine to come up
	I1028 11:11:02.566777  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:02.567197  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:02.567222  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:02.567162  150746 retry.go:31] will retry after 677.799642ms: waiting for machine to come up
	I1028 11:11:03.246160  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:03.246663  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:03.246691  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:03.246611  150746 retry.go:31] will retry after 661.382392ms: waiting for machine to come up
	I1028 11:11:03.909443  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:03.909955  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:03.910006  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:03.909898  150746 retry.go:31] will retry after 1.086932803s: waiting for machine to come up
	I1028 11:11:04.997802  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:04.998295  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:04.998322  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:04.998231  150746 retry.go:31] will retry after 1.028978753s: waiting for machine to come up
	I1028 11:11:06.028312  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:06.028699  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:06.028724  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:06.028658  150746 retry.go:31] will retry after 1.229241603s: waiting for machine to come up
	I1028 11:11:07.259043  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:07.259415  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:07.259442  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:07.259356  150746 retry.go:31] will retry after 1.621101278s: waiting for machine to come up
	I1028 11:11:08.882760  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:08.883130  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:08.883166  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:08.883106  150746 retry.go:31] will retry after 2.010099388s: waiting for machine to come up
	I1028 11:11:10.894594  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:10.895005  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:10.895028  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:10.894965  150746 retry.go:31] will retry after 2.268994964s: waiting for machine to come up
	I1028 11:11:13.166469  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:13.166906  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:13.166930  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:13.166853  150746 retry.go:31] will retry after 2.964491157s: waiting for machine to come up
	I1028 11:11:16.134568  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:16.135014  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:16.135030  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:16.134978  150746 retry.go:31] will retry after 3.669669561s: waiting for machine to come up
	I1028 11:11:19.805844  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:19.806451  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find current IP address of domain ha-928358 in network mk-ha-928358
	I1028 11:11:19.806483  150723 main.go:141] libmachine: (ha-928358) DBG | I1028 11:11:19.806402  150746 retry.go:31] will retry after 6.986761695s: waiting for machine to come up
	I1028 11:11:26.796618  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.797199  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has current primary IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.797228  150723 main.go:141] libmachine: (ha-928358) Found IP for machine: 192.168.39.206
	I1028 11:11:26.797258  150723 main.go:141] libmachine: (ha-928358) Reserving static IP address...
	I1028 11:11:26.797624  150723 main.go:141] libmachine: (ha-928358) DBG | unable to find host DHCP lease matching {name: "ha-928358", mac: "52:54:00:dd:b2:b7", ip: "192.168.39.206"} in network mk-ha-928358
	I1028 11:11:26.873582  150723 main.go:141] libmachine: (ha-928358) Reserved static IP address: 192.168.39.206
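Reserving the static IP corresponds to adding a DHCP host entry to the mk-ha-928358 network; done manually it would look roughly like this (MAC, name, and IP taken from the log above):

    virsh net-update mk-ha-928358 add ip-dhcp-host \
      "<host mac='52:54:00:dd:b2:b7' name='ha-928358' ip='192.168.39.206'/>" \
      --live --config
    virsh net-dumpxml mk-ha-928358   # the <dhcp> section should now carry the reservation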
	I1028 11:11:26.873609  150723 main.go:141] libmachine: (ha-928358) Waiting for SSH to be available...
	I1028 11:11:26.873619  150723 main.go:141] libmachine: (ha-928358) DBG | Getting to WaitForSSH function...
	I1028 11:11:26.876283  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.876750  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:26.876781  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:26.876886  150723 main.go:141] libmachine: (ha-928358) DBG | Using SSH client type: external
	I1028 11:11:26.876901  150723 main.go:141] libmachine: (ha-928358) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa (-rw-------)
	I1028 11:11:26.876929  150723 main.go:141] libmachine: (ha-928358) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:11:26.876941  150723 main.go:141] libmachine: (ha-928358) DBG | About to run SSH command:
	I1028 11:11:26.876952  150723 main.go:141] libmachine: (ha-928358) DBG | exit 0
	I1028 11:11:27.009708  150723 main.go:141] libmachine: (ha-928358) DBG | SSH cmd err, output: <nil>: 
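The WaitForSSH probe above is simply an external ssh invocation that must exit 0; a manual equivalent using the same key and options that were logged is:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o PasswordAuthentication=no -o ConnectTimeout=10 -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa \
        docker@192.168.39.206 'exit 0' && echo reachable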
	I1028 11:11:27.010071  150723 main.go:141] libmachine: (ha-928358) KVM machine creation complete!
	I1028 11:11:27.010352  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:11:27.010925  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:27.011146  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:27.011301  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:11:27.011311  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:27.012679  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:11:27.012693  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:11:27.012699  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:11:27.012704  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.014867  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.015214  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.015263  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.015327  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.015507  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.015644  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.015739  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.015911  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.016106  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.016117  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:11:27.128876  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:11:27.128903  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:11:27.128915  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.131646  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.132081  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.132109  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.132331  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.132525  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.132697  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.132852  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.133070  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.133229  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.133242  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:11:27.250569  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:11:27.250647  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:11:27.250657  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:11:27.250664  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.250929  150723 buildroot.go:166] provisioning hostname "ha-928358"
	I1028 11:11:27.250971  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.251130  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.253765  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.254120  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.254146  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.254297  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.254451  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.254601  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.254758  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.254909  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.255102  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.255118  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358 && echo "ha-928358" | sudo tee /etc/hostname
	I1028 11:11:27.384932  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358
	
	I1028 11:11:27.384962  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.387904  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.388215  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.388243  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.388516  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.388719  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.388884  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.389002  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.389152  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.389334  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.389355  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:11:27.516473  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:11:27.516502  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:11:27.516519  150723 buildroot.go:174] setting up certificates
	I1028 11:11:27.516529  150723 provision.go:84] configureAuth start
	I1028 11:11:27.516537  150723 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:11:27.516866  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:27.519682  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.520053  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.520077  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.520298  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.522648  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.522984  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.523022  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.523127  150723 provision.go:143] copyHostCerts
	I1028 11:11:27.523161  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:11:27.523220  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:11:27.523235  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:11:27.523317  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:11:27.523418  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:11:27.523442  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:11:27.523451  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:11:27.523494  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:11:27.523565  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:11:27.523591  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:11:27.523600  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:11:27.523634  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:11:27.523699  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358 san=[127.0.0.1 192.168.39.206 ha-928358 localhost minikube]
	I1028 11:11:27.652184  150723 provision.go:177] copyRemoteCerts
	I1028 11:11:27.652239  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:11:27.652263  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.655247  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.655509  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.655537  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.655747  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.655942  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.656141  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.656367  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:27.747959  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:11:27.748026  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:11:27.773785  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:11:27.773875  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1028 11:11:27.798172  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:11:27.798246  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
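The host-side copy of the server certificate generated in configureAuth can be inspected to confirm the SANs requested above (127.0.0.1, 192.168.39.206, ha-928358, localhost, minikube); a sketch, assuming openssl is available on the Jenkins host:

    openssl x509 -in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'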
	I1028 11:11:27.823795  150723 provision.go:87] duration metric: took 307.251687ms to configureAuth
	I1028 11:11:27.823824  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:11:27.823999  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:27.824098  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:27.826733  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.827058  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:27.827095  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:27.827231  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:27.827430  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.827593  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:27.827720  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:27.827882  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:27.828064  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:27.828082  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:11:28.063521  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:11:28.063544  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:11:28.063563  150723 main.go:141] libmachine: (ha-928358) Calling .GetURL
	I1028 11:11:28.064889  150723 main.go:141] libmachine: (ha-928358) DBG | Using libvirt version 6000000
	I1028 11:11:28.067440  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.067909  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.067936  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.068169  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:11:28.068184  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:11:28.068190  150723 client.go:171] duration metric: took 28.511205055s to LocalClient.Create
	I1028 11:11:28.068213  150723 start.go:167] duration metric: took 28.511273119s to libmachine.API.Create "ha-928358"
	I1028 11:11:28.068224  150723 start.go:293] postStartSetup for "ha-928358" (driver="kvm2")
	I1028 11:11:28.068234  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:11:28.068250  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.068499  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:11:28.068524  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.070718  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.071018  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.071047  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.071207  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.071391  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.071596  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.071768  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.160093  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:11:28.164580  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:11:28.164611  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:11:28.164677  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:11:28.164753  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:11:28.164768  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:11:28.164860  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:11:28.174780  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:11:28.200051  150723 start.go:296] duration metric: took 131.810016ms for postStartSetup
	I1028 11:11:28.200113  150723 main.go:141] libmachine: (ha-928358) Calling .GetConfigRaw
	I1028 11:11:28.200681  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:28.203634  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.204015  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.204039  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.204248  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:28.204459  150723 start.go:128] duration metric: took 28.665968765s to createHost
	I1028 11:11:28.204486  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.206915  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.207241  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.207270  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.207406  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.207565  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.207714  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.207841  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.207995  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:11:28.208148  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:11:28.208158  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:11:28.326642  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730113888.306870077
	
	I1028 11:11:28.326664  150723 fix.go:216] guest clock: 1730113888.306870077
	I1028 11:11:28.326674  150723 fix.go:229] Guest: 2024-10-28 11:11:28.306870077 +0000 UTC Remote: 2024-10-28 11:11:28.204471945 +0000 UTC m=+28.781211208 (delta=102.398132ms)
	I1028 11:11:28.326699  150723 fix.go:200] guest clock delta is within tolerance: 102.398132ms
	I1028 11:11:28.326706  150723 start.go:83] releasing machines lock for "ha-928358", held for 28.788289196s
	I1028 11:11:28.326726  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.327001  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:28.329581  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.329968  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.330003  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.330168  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330728  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330884  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:28.330998  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:11:28.331060  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.331115  150723 ssh_runner.go:195] Run: cat /version.json
	I1028 11:11:28.331141  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:28.333639  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.333966  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.333994  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334015  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334246  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.334387  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:28.334412  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:28.334416  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.334585  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:28.334627  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.334755  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:28.334771  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.334927  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:28.335084  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:28.419255  150723 ssh_runner.go:195] Run: systemctl --version
	I1028 11:11:28.450377  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:11:28.614960  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:11:28.621690  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:11:28.621762  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:11:28.640026  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:11:28.640058  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:11:28.640161  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:11:28.657821  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:11:28.673308  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:11:28.673372  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:11:28.688651  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:11:28.704016  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:11:28.829012  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:11:28.990202  150723 docker.go:233] disabling docker service ...
	I1028 11:11:28.990264  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:11:29.006016  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:11:29.019798  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:11:29.148701  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:11:29.286836  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:11:29.301306  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:11:29.321180  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:11:29.321242  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.332417  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:11:29.332516  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.344116  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.355229  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.366386  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:11:29.377683  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.388680  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:11:29.406712  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
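Taken together, the sed commands above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (an illustrative reconstruction, not a dump captured from this run; the TOML section headers are assumed):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]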
	I1028 11:11:29.418602  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:11:29.428422  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:11:29.428489  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:11:29.442860  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:11:29.453466  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:11:29.587618  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:11:29.702292  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:11:29.702379  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:11:29.708037  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:11:29.708101  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:11:29.712169  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:11:29.760681  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:11:29.760781  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:11:29.793958  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:11:29.827829  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:11:29.829108  150723 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:11:29.831950  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:29.832308  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:29.832337  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:29.832530  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:11:29.837077  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:11:29.850764  150723 kubeadm.go:883] updating cluster {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:11:29.850982  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:11:29.851067  150723 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:11:29.884186  150723 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 11:11:29.884257  150723 ssh_runner.go:195] Run: which lz4
	I1028 11:11:29.888297  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1028 11:11:29.888406  150723 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:11:29.892595  150723 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:11:29.892630  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 11:11:31.364550  150723 crio.go:462] duration metric: took 1.47616531s to copy over tarball
	I1028 11:11:31.364646  150723 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:11:33.492729  150723 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.128048416s)
	I1028 11:11:33.492765  150723 crio.go:469] duration metric: took 2.12817379s to extract the tarball
	I1028 11:11:33.492775  150723 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:11:33.530789  150723 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:11:33.576388  150723 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:11:33.576418  150723 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:11:33.576428  150723 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.2 crio true true} ...
	I1028 11:11:33.576525  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:11:33.576597  150723 ssh_runner.go:195] Run: crio config
	I1028 11:11:33.628433  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:11:33.628457  150723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:11:33.628468  150723 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:11:33.628490  150723 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-928358 NodeName:ha-928358 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:11:33.628623  150723 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-928358"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:11:33.628649  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:11:33.628693  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:11:33.645502  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:11:33.645637  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:11:33.645712  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:11:33.657169  150723 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:11:33.657234  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:11:33.668705  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:11:33.687712  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:11:33.707287  150723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:11:33.725968  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1028 11:11:33.745306  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:11:33.749954  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:11:33.764379  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:11:33.885154  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:11:33.902745  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.206
	I1028 11:11:33.902769  150723 certs.go:194] generating shared ca certs ...
	I1028 11:11:33.902784  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:33.902965  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:11:33.903024  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:11:33.903039  150723 certs.go:256] generating profile certs ...
	I1028 11:11:33.903106  150723 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:11:33.903126  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt with IP's: []
	I1028 11:11:34.090717  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt ...
	I1028 11:11:34.090747  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt: {Name:mk3976b6be27fc4f31aa39dbf48c0afa90955478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.090957  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key ...
	I1028 11:11:34.090981  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key: {Name:mk302db81268b764894e98d850b90eaaced7a15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.091101  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923
	I1028 11:11:34.091124  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.254]
	I1028 11:11:34.335900  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 ...
	I1028 11:11:34.335935  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923: {Name:mk0008343e6fdd7a08b2d031f0ba617f7a66f590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.336144  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923 ...
	I1028 11:11:34.336163  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923: {Name:mkd6c56ea43ae5fd58d0e46e3c3070e385813140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.336286  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.a4d1a923 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:11:34.336450  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.a4d1a923 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:11:34.336537  150723 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:11:34.336559  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt with IP's: []
	I1028 11:11:34.464000  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt ...
	I1028 11:11:34.464029  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt: {Name:mkb9ddbbbcf10a07648ff0910f8f6f99edd94a08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.464231  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key ...
	I1028 11:11:34.464247  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key: {Name:mk17d0ad23ae67dc57b4cfd6ae702fbcda30c491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:34.464343  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:11:34.464369  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:11:34.464389  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:11:34.464407  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:11:34.464422  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:11:34.464435  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:11:34.464453  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:11:34.464472  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:11:34.464549  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:11:34.464601  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:11:34.464617  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:11:34.464647  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:11:34.464682  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:11:34.464714  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:11:34.464766  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:11:34.464809  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.464829  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.464844  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.465667  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:11:34.492761  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:11:34.519090  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:11:34.544886  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:11:34.571307  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:11:34.596836  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:11:34.622460  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:11:34.648376  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:11:34.677988  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:11:34.708308  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:11:34.732512  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:11:34.757152  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:11:34.774559  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:11:34.780665  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:11:34.792209  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.797675  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.797733  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:11:34.804182  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:11:34.816617  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:11:34.829067  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.834000  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.834062  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:11:34.840080  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:11:34.851913  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:11:34.863842  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.868862  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.868942  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:11:34.875065  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:11:34.888703  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:11:34.893205  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:11:34.893271  150723 kubeadm.go:392] StartCluster: {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:11:34.893354  150723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:11:34.893425  150723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:11:34.932903  150723 cri.go:89] found id: ""
	I1028 11:11:34.932974  150723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:11:34.944526  150723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:11:34.956312  150723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:11:34.967457  150723 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:11:34.967484  150723 kubeadm.go:157] found existing configuration files:
	
	I1028 11:11:34.967537  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:11:34.977810  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:11:34.977875  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:11:34.988232  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:11:34.998184  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:11:34.998247  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:11:35.008728  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:11:35.018729  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:11:35.018793  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:11:35.029800  150723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:11:35.040304  150723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:11:35.040357  150723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:11:35.050830  150723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 11:11:35.164435  150723 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 11:11:35.164499  150723 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 11:11:35.281374  150723 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 11:11:35.281556  150723 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 11:11:35.281686  150723 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 11:11:35.294386  150723 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 11:11:35.479371  150723 out.go:235]   - Generating certificates and keys ...
	I1028 11:11:35.479512  150723 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 11:11:35.479602  150723 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 11:11:35.531977  150723 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 11:11:35.706199  150723 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 11:11:35.805605  150723 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 11:11:35.955545  150723 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 11:11:36.024313  150723 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 11:11:36.024446  150723 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-928358 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1028 11:11:36.166366  150723 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 11:11:36.166553  150723 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-928358 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1028 11:11:36.477451  150723 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 11:11:36.529937  150723 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 11:11:36.764928  150723 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 11:11:36.765199  150723 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 11:11:36.958542  150723 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 11:11:37.098519  150723 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 11:11:37.432447  150723 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 11:11:37.510265  150723 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 11:11:37.727523  150723 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 11:11:37.728159  150723 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 11:11:37.734975  150723 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 11:11:37.736761  150723 out.go:235]   - Booting up control plane ...
	I1028 11:11:37.736891  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 11:11:37.737036  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 11:11:37.737392  150723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 11:11:37.761460  150723 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 11:11:37.769245  150723 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 11:11:37.769327  150723 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 11:11:37.901440  150723 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 11:11:37.901605  150723 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 11:11:38.403804  150723 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.460314ms
	I1028 11:11:38.403927  150723 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 11:11:44.555956  150723 kubeadm.go:310] [api-check] The API server is healthy after 6.1544774s
	I1028 11:11:44.584149  150723 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 11:11:44.607891  150723 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 11:11:44.647415  150723 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 11:11:44.647602  150723 kubeadm.go:310] [mark-control-plane] Marking the node ha-928358 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 11:11:44.670940  150723 kubeadm.go:310] [bootstrap-token] Using token: 7u74ui.ti422fa98pbd45zp
	I1028 11:11:44.672724  150723 out.go:235]   - Configuring RBAC rules ...
	I1028 11:11:44.672861  150723 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 11:11:44.681325  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 11:11:44.701467  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 11:11:44.720481  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 11:11:44.731591  150723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 11:11:44.743611  150723 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 11:11:44.968060  150723 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 11:11:45.411017  150723 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 11:11:45.970736  150723 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 11:11:45.970791  150723 kubeadm.go:310] 
	I1028 11:11:45.970885  150723 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 11:11:45.970911  150723 kubeadm.go:310] 
	I1028 11:11:45.971033  150723 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 11:11:45.971045  150723 kubeadm.go:310] 
	I1028 11:11:45.971081  150723 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 11:11:45.971155  150723 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 11:11:45.971234  150723 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 11:11:45.971246  150723 kubeadm.go:310] 
	I1028 11:11:45.971327  150723 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 11:11:45.971346  150723 kubeadm.go:310] 
	I1028 11:11:45.971421  150723 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 11:11:45.971432  150723 kubeadm.go:310] 
	I1028 11:11:45.971526  150723 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 11:11:45.971668  150723 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 11:11:45.971782  150723 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 11:11:45.971802  150723 kubeadm.go:310] 
	I1028 11:11:45.971912  150723 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 11:11:45.972050  150723 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 11:11:45.972078  150723 kubeadm.go:310] 
	I1028 11:11:45.972201  150723 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7u74ui.ti422fa98pbd45zp \
	I1028 11:11:45.972360  150723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 11:11:45.972397  150723 kubeadm.go:310] 	--control-plane 
	I1028 11:11:45.972407  150723 kubeadm.go:310] 
	I1028 11:11:45.972546  150723 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 11:11:45.972563  150723 kubeadm.go:310] 
	I1028 11:11:45.972685  150723 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7u74ui.ti422fa98pbd45zp \
	I1028 11:11:45.972831  150723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 11:11:45.973046  150723 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 11:11:45.973098  150723 cni.go:84] Creating CNI manager for ""
	I1028 11:11:45.973115  150723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1028 11:11:45.975136  150723 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1028 11:11:45.976845  150723 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1028 11:11:45.982665  150723 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1028 11:11:45.982687  150723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1028 11:11:46.004414  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1028 11:11:46.391016  150723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:11:46.391108  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:46.391153  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358 minikube.k8s.io/updated_at=2024_10_28T11_11_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=true
	I1028 11:11:46.556219  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:46.556239  150723 ops.go:34] apiserver oom_adj: -16
	I1028 11:11:47.056803  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:47.556401  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:48.057031  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:48.556648  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:49.056531  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:49.556278  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.056341  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.557096  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 11:11:50.688176  150723 kubeadm.go:1113] duration metric: took 4.297146148s to wait for elevateKubeSystemPrivileges
	I1028 11:11:50.688219  150723 kubeadm.go:394] duration metric: took 15.794958001s to StartCluster
	I1028 11:11:50.688240  150723 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:50.688317  150723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:11:50.689020  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:11:50.689264  150723 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:11:50.689283  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1028 11:11:50.689310  150723 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:11:50.689399  150723 addons.go:69] Setting storage-provisioner=true in profile "ha-928358"
	I1028 11:11:50.689294  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:11:50.689432  150723 addons.go:69] Setting default-storageclass=true in profile "ha-928358"
	I1028 11:11:50.689434  150723 addons.go:234] Setting addon storage-provisioner=true in "ha-928358"
	I1028 11:11:50.689444  150723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-928358"
	I1028 11:11:50.689473  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:11:50.689502  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:50.689978  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.690024  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.690030  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.690078  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.705787  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I1028 11:11:50.705799  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1028 11:11:50.706396  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.706425  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.706943  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.706961  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.707116  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.707141  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.707344  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.707538  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.707605  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.708242  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.708286  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.709865  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:11:50.710123  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:11:50.710718  150723 addons.go:234] Setting addon default-storageclass=true in "ha-928358"
	I1028 11:11:50.710749  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:11:50.710982  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.711007  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.711160  150723 cert_rotation.go:140] Starting client certificate rotation controller
	I1028 11:11:50.724777  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I1028 11:11:50.725295  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.725751  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33439
	I1028 11:11:50.725906  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.725930  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.726287  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.726327  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.726526  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.726809  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.726831  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.727169  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.727730  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:50.727777  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:50.728384  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:50.730334  150723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:11:50.731788  150723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:11:50.731810  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:11:50.731829  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:50.735112  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.735661  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:50.735681  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.735902  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:50.736091  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:50.736234  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:50.736386  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:50.743829  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I1028 11:11:50.744355  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:50.744925  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:50.744949  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:50.745276  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:50.745461  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:11:50.747144  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:11:50.747358  150723 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:11:50.747374  150723 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:11:50.747388  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:11:50.749934  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.750358  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:11:50.750397  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:11:50.750503  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:11:50.750676  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:11:50.750813  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:11:50.750942  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:11:50.872575  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1028 11:11:50.921646  150723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:11:50.984303  150723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:11:51.311574  150723 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1028 11:11:51.359517  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.359546  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.359929  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.359938  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.359978  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.359992  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.360011  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.360266  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.360332  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.360347  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.360405  150723 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:11:51.360435  150723 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:11:51.360539  150723 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1028 11:11:51.360552  150723 round_trippers.go:469] Request Headers:
	I1028 11:11:51.360564  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:11:51.360580  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:11:51.370574  150723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:11:51.371224  150723 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1028 11:11:51.371242  150723 round_trippers.go:469] Request Headers:
	I1028 11:11:51.371253  150723 round_trippers.go:473]     Content-Type: application/json
	I1028 11:11:51.371260  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:11:51.371264  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:11:51.378842  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:11:51.379088  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.379107  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.379391  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.379407  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.723667  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.723697  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.724015  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.724061  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.724071  150723 main.go:141] libmachine: Making call to close driver server
	I1028 11:11:51.724078  150723 main.go:141] libmachine: (ha-928358) Calling .Close
	I1028 11:11:51.724024  150723 main.go:141] libmachine: (ha-928358) DBG | Closing plugin on server side
	I1028 11:11:51.724319  150723 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:11:51.724335  150723 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:11:51.726167  150723 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1028 11:11:51.727603  150723 addons.go:510] duration metric: took 1.038296123s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 11:11:51.727646  150723 start.go:246] waiting for cluster config update ...
	I1028 11:11:51.727661  150723 start.go:255] writing updated cluster config ...
	I1028 11:11:51.729506  150723 out.go:201] 
	I1028 11:11:51.731166  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:11:51.731233  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:51.732989  150723 out.go:177] * Starting "ha-928358-m02" control-plane node in "ha-928358" cluster
	I1028 11:11:51.734422  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:11:51.734443  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:11:51.734539  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:11:51.734550  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:11:51.734619  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:11:51.734790  150723 start.go:360] acquireMachinesLock for ha-928358-m02: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:11:51.734834  150723 start.go:364] duration metric: took 28.788µs to acquireMachinesLock for "ha-928358-m02"
	I1028 11:11:51.734851  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:11:51.734918  150723 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1028 11:11:51.736531  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:11:51.736608  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:11:51.736641  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:11:51.751347  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I1028 11:11:51.751714  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:11:51.752299  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:11:51.752328  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:11:51.752603  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:11:51.752792  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:11:51.752934  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:11:51.753123  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:11:51.753174  150723 client.go:168] LocalClient.Create starting
	I1028 11:11:51.753215  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:11:51.753263  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:11:51.753289  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:11:51.753362  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:11:51.753389  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:11:51.753404  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:11:51.753437  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:11:51.753449  150723 main.go:141] libmachine: (ha-928358-m02) Calling .PreCreateCheck
	I1028 11:11:51.753595  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:11:51.754006  150723 main.go:141] libmachine: Creating machine...
	I1028 11:11:51.754022  150723 main.go:141] libmachine: (ha-928358-m02) Calling .Create
	I1028 11:11:51.754205  150723 main.go:141] libmachine: (ha-928358-m02) Creating KVM machine...
	I1028 11:11:51.755415  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found existing default KVM network
	I1028 11:11:51.755582  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found existing private KVM network mk-ha-928358
	I1028 11:11:51.755707  150723 main.go:141] libmachine: (ha-928358-m02) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 ...
	I1028 11:11:51.755730  150723 main.go:141] libmachine: (ha-928358-m02) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:11:51.755821  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:51.755707  151103 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:51.755971  150723 main.go:141] libmachine: (ha-928358-m02) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:11:51.993174  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:51.993039  151103 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa...
	I1028 11:11:52.383008  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:52.382864  151103 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/ha-928358-m02.rawdisk...
	I1028 11:11:52.383053  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Writing magic tar header
	I1028 11:11:52.383094  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Writing SSH key tar header
	I1028 11:11:52.383117  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:52.383029  151103 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 ...
	I1028 11:11:52.383167  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02
	I1028 11:11:52.383203  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:11:52.383214  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02 (perms=drwx------)
	I1028 11:11:52.383224  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:11:52.383237  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:11:52.383258  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:11:52.383272  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:11:52.383295  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:11:52.383304  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:11:52.383313  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:11:52.383324  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:11:52.383332  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Checking permissions on dir: /home
	I1028 11:11:52.383343  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Skipping /home - not owner
	I1028 11:11:52.383370  150723 main.go:141] libmachine: (ha-928358-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:11:52.383390  150723 main.go:141] libmachine: (ha-928358-m02) Creating domain...
	I1028 11:11:52.384348  150723 main.go:141] libmachine: (ha-928358-m02) define libvirt domain using xml: 
	I1028 11:11:52.384373  150723 main.go:141] libmachine: (ha-928358-m02) <domain type='kvm'>
	I1028 11:11:52.384400  150723 main.go:141] libmachine: (ha-928358-m02)   <name>ha-928358-m02</name>
	I1028 11:11:52.384412  150723 main.go:141] libmachine: (ha-928358-m02)   <memory unit='MiB'>2200</memory>
	I1028 11:11:52.384426  150723 main.go:141] libmachine: (ha-928358-m02)   <vcpu>2</vcpu>
	I1028 11:11:52.384436  150723 main.go:141] libmachine: (ha-928358-m02)   <features>
	I1028 11:11:52.384457  150723 main.go:141] libmachine: (ha-928358-m02)     <acpi/>
	I1028 11:11:52.384472  150723 main.go:141] libmachine: (ha-928358-m02)     <apic/>
	I1028 11:11:52.384478  150723 main.go:141] libmachine: (ha-928358-m02)     <pae/>
	I1028 11:11:52.384482  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384490  150723 main.go:141] libmachine: (ha-928358-m02)   </features>
	I1028 11:11:52.384494  150723 main.go:141] libmachine: (ha-928358-m02)   <cpu mode='host-passthrough'>
	I1028 11:11:52.384501  150723 main.go:141] libmachine: (ha-928358-m02)   
	I1028 11:11:52.384506  150723 main.go:141] libmachine: (ha-928358-m02)   </cpu>
	I1028 11:11:52.384511  150723 main.go:141] libmachine: (ha-928358-m02)   <os>
	I1028 11:11:52.384516  150723 main.go:141] libmachine: (ha-928358-m02)     <type>hvm</type>
	I1028 11:11:52.384522  150723 main.go:141] libmachine: (ha-928358-m02)     <boot dev='cdrom'/>
	I1028 11:11:52.384526  150723 main.go:141] libmachine: (ha-928358-m02)     <boot dev='hd'/>
	I1028 11:11:52.384531  150723 main.go:141] libmachine: (ha-928358-m02)     <bootmenu enable='no'/>
	I1028 11:11:52.384537  150723 main.go:141] libmachine: (ha-928358-m02)   </os>
	I1028 11:11:52.384561  150723 main.go:141] libmachine: (ha-928358-m02)   <devices>
	I1028 11:11:52.384580  150723 main.go:141] libmachine: (ha-928358-m02)     <disk type='file' device='cdrom'>
	I1028 11:11:52.384598  150723 main.go:141] libmachine: (ha-928358-m02)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/boot2docker.iso'/>
	I1028 11:11:52.384615  150723 main.go:141] libmachine: (ha-928358-m02)       <target dev='hdc' bus='scsi'/>
	I1028 11:11:52.384624  150723 main.go:141] libmachine: (ha-928358-m02)       <readonly/>
	I1028 11:11:52.384628  150723 main.go:141] libmachine: (ha-928358-m02)     </disk>
	I1028 11:11:52.384634  150723 main.go:141] libmachine: (ha-928358-m02)     <disk type='file' device='disk'>
	I1028 11:11:52.384642  150723 main.go:141] libmachine: (ha-928358-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:11:52.384650  150723 main.go:141] libmachine: (ha-928358-m02)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/ha-928358-m02.rawdisk'/>
	I1028 11:11:52.384657  150723 main.go:141] libmachine: (ha-928358-m02)       <target dev='hda' bus='virtio'/>
	I1028 11:11:52.384661  150723 main.go:141] libmachine: (ha-928358-m02)     </disk>
	I1028 11:11:52.384668  150723 main.go:141] libmachine: (ha-928358-m02)     <interface type='network'>
	I1028 11:11:52.384674  150723 main.go:141] libmachine: (ha-928358-m02)       <source network='mk-ha-928358'/>
	I1028 11:11:52.384681  150723 main.go:141] libmachine: (ha-928358-m02)       <model type='virtio'/>
	I1028 11:11:52.384688  150723 main.go:141] libmachine: (ha-928358-m02)     </interface>
	I1028 11:11:52.384692  150723 main.go:141] libmachine: (ha-928358-m02)     <interface type='network'>
	I1028 11:11:52.384698  150723 main.go:141] libmachine: (ha-928358-m02)       <source network='default'/>
	I1028 11:11:52.384703  150723 main.go:141] libmachine: (ha-928358-m02)       <model type='virtio'/>
	I1028 11:11:52.384708  150723 main.go:141] libmachine: (ha-928358-m02)     </interface>
	I1028 11:11:52.384713  150723 main.go:141] libmachine: (ha-928358-m02)     <serial type='pty'>
	I1028 11:11:52.384742  150723 main.go:141] libmachine: (ha-928358-m02)       <target port='0'/>
	I1028 11:11:52.384769  150723 main.go:141] libmachine: (ha-928358-m02)     </serial>
	I1028 11:11:52.384791  150723 main.go:141] libmachine: (ha-928358-m02)     <console type='pty'>
	I1028 11:11:52.384814  150723 main.go:141] libmachine: (ha-928358-m02)       <target type='serial' port='0'/>
	I1028 11:11:52.384828  150723 main.go:141] libmachine: (ha-928358-m02)     </console>
	I1028 11:11:52.384840  150723 main.go:141] libmachine: (ha-928358-m02)     <rng model='virtio'>
	I1028 11:11:52.384852  150723 main.go:141] libmachine: (ha-928358-m02)       <backend model='random'>/dev/random</backend>
	I1028 11:11:52.384859  150723 main.go:141] libmachine: (ha-928358-m02)     </rng>
	I1028 11:11:52.384865  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384887  150723 main.go:141] libmachine: (ha-928358-m02)     
	I1028 11:11:52.384900  150723 main.go:141] libmachine: (ha-928358-m02)   </devices>
	I1028 11:11:52.384910  150723 main.go:141] libmachine: (ha-928358-m02) </domain>
	I1028 11:11:52.384921  150723 main.go:141] libmachine: (ha-928358-m02) 
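Note: the XML dumped above is what the kvm2 driver hands to libvirt when it defines the m02 domain. A minimal libvirt-go sketch of defining and booting a domain from such XML is shown below; it is not the driver's actual code, and the XML contents and error handling are elided.

// Sketch: define and start a KVM domain from XML using libvirt-go.
package sketch

import (
	libvirt "github.com/libvirt/libvirt-go"
)

func defineAndStart(domainXML string) error {
	// same URI as KVMQemuURI in the machine config above
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// persist the domain definition, then boot it
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create()
}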
	I1028 11:11:52.391941  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:67:49 in network default
	I1028 11:11:52.392560  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring networks are active...
	I1028 11:11:52.392579  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:52.393436  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring network default is active
	I1028 11:11:52.393821  150723 main.go:141] libmachine: (ha-928358-m02) Ensuring network mk-ha-928358 is active
	I1028 11:11:52.394171  150723 main.go:141] libmachine: (ha-928358-m02) Getting domain xml...
	I1028 11:11:52.394853  150723 main.go:141] libmachine: (ha-928358-m02) Creating domain...
	I1028 11:11:53.630024  150723 main.go:141] libmachine: (ha-928358-m02) Waiting to get IP...
	I1028 11:11:53.630962  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:53.631449  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:53.631495  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:53.631430  151103 retry.go:31] will retry after 231.171985ms: waiting for machine to come up
	I1028 11:11:53.864111  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:53.864512  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:53.864546  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:53.864499  151103 retry.go:31] will retry after 296.507043ms: waiting for machine to come up
	I1028 11:11:54.163050  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:54.163543  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:54.163593  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:54.163496  151103 retry.go:31] will retry after 357.855811ms: waiting for machine to come up
	I1028 11:11:54.523089  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:54.523546  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:54.523575  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:54.523481  151103 retry.go:31] will retry after 569.003787ms: waiting for machine to come up
	I1028 11:11:55.094333  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:55.094770  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:55.094795  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:55.094741  151103 retry.go:31] will retry after 495.310626ms: waiting for machine to come up
	I1028 11:11:55.591480  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:55.592037  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:55.592065  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:55.591984  151103 retry.go:31] will retry after 697.027358ms: waiting for machine to come up
	I1028 11:11:56.291011  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:56.291427  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:56.291455  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:56.291390  151103 retry.go:31] will retry after 819.98241ms: waiting for machine to come up
	I1028 11:11:57.112476  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:57.112920  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:57.112950  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:57.112861  151103 retry.go:31] will retry after 1.468451423s: waiting for machine to come up
	I1028 11:11:58.582633  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:11:58.583095  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:11:58.583117  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:11:58.583044  151103 retry.go:31] will retry after 1.732332827s: waiting for machine to come up
	I1028 11:12:00.316579  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:00.316974  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:00.317005  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:00.316915  151103 retry.go:31] will retry after 1.701246598s: waiting for machine to come up
	I1028 11:12:02.020279  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:02.020762  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:02.020780  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:02.020732  151103 retry.go:31] will retry after 2.239954262s: waiting for machine to come up
	I1028 11:12:04.262705  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:04.263103  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:04.263134  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:04.263076  151103 retry.go:31] will retry after 3.584543805s: waiting for machine to come up
	I1028 11:12:07.848824  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:07.849223  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:07.849246  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:07.849186  151103 retry.go:31] will retry after 4.083747812s: waiting for machine to come up
	I1028 11:12:11.934986  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:11.935519  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find current IP address of domain ha-928358-m02 in network mk-ha-928358
	I1028 11:12:11.935541  150723 main.go:141] libmachine: (ha-928358-m02) DBG | I1028 11:12:11.935464  151103 retry.go:31] will retry after 5.450262186s: waiting for machine to come up
	I1028 11:12:17.387598  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.388014  150723 main.go:141] libmachine: (ha-928358-m02) Found IP for machine: 192.168.39.15
	I1028 11:12:17.388040  150723 main.go:141] libmachine: (ha-928358-m02) Reserving static IP address...
	I1028 11:12:17.388061  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has current primary IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.388484  150723 main.go:141] libmachine: (ha-928358-m02) DBG | unable to find host DHCP lease matching {name: "ha-928358-m02", mac: "52:54:00:6f:70:28", ip: "192.168.39.15"} in network mk-ha-928358
	I1028 11:12:17.468628  150723 main.go:141] libmachine: (ha-928358-m02) Reserved static IP address: 192.168.39.15
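Note: the wait loop above polls the DHCP leases for the new domain and backs off with growing, jittered delays (from ~231ms up to several seconds) until an IP appears. A bare-bones sketch of that polling pattern, using only the standard library, is below; the lookup function is a stand-in, not the driver's.

// Sketch: poll for a machine IP with increasing, jittered delays, roughly like the retry.go waits in the log.
package sketch

import (
	"errors"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// jitter the delay, then grow it, capped at a few seconds
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}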
	I1028 11:12:17.468659  150723 main.go:141] libmachine: (ha-928358-m02) Waiting for SSH to be available...
	I1028 11:12:17.468668  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Getting to WaitForSSH function...
	I1028 11:12:17.471501  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.472007  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.472034  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.472218  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using SSH client type: external
	I1028 11:12:17.472251  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa (-rw-------)
	I1028 11:12:17.472281  150723 main.go:141] libmachine: (ha-928358-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:12:17.472296  150723 main.go:141] libmachine: (ha-928358-m02) DBG | About to run SSH command:
	I1028 11:12:17.472313  150723 main.go:141] libmachine: (ha-928358-m02) DBG | exit 0
	I1028 11:12:17.602076  150723 main.go:141] libmachine: (ha-928358-m02) DBG | SSH cmd err, output: <nil>: 
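Note: SSH readiness above is probed by running "exit 0" through the external ssh client with the options listed a few lines earlier. A short os/exec sketch of the same probe follows; the binary path and flags are copied from the log, and the snippet is illustrative rather than minikube's implementation.

// Sketch: probe SSH readiness by running "exit 0" via the external ssh binary.
package sketch

import (
	"os/exec"
)

func sshReady(keyPath, user, ip string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes", "-i", keyPath,
		"-p", "22", user + "@" + ip,
		"exit 0",
	}
	// a nil error means the remote command exited 0, i.e. SSH is up
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}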
	I1028 11:12:17.602372  150723 main.go:141] libmachine: (ha-928358-m02) KVM machine creation complete!
	I1028 11:12:17.602744  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:12:17.603321  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:17.603533  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:17.603697  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:12:17.603728  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetState
	I1028 11:12:17.605258  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:12:17.605275  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:12:17.605282  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:12:17.605291  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.607333  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.607701  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.607721  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.607912  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.608143  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.608313  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.608439  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.608583  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.608808  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.608820  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:12:17.721307  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:12:17.721336  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:12:17.721347  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.724798  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.725194  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.725223  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.725409  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.725636  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.725807  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.725966  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.726099  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.726262  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.726279  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:12:17.838473  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:12:17.838586  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:12:17.838602  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:12:17.838613  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:17.838892  150723 buildroot.go:166] provisioning hostname "ha-928358-m02"
	I1028 11:12:17.838917  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:17.839093  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.841883  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.842317  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.842339  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.842472  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.842669  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.842831  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.842971  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.843156  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.843326  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.843338  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358-m02 && echo "ha-928358-m02" | sudo tee /etc/hostname
	I1028 11:12:17.968498  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358-m02
	
	I1028 11:12:17.968528  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:17.971246  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.971623  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:17.971653  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:17.971818  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:17.971988  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.972158  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:17.972315  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:17.972474  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:17.972671  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:17.972693  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:12:18.095026  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:12:18.095079  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:12:18.095099  150723 buildroot.go:174] setting up certificates
	I1028 11:12:18.095111  150723 provision.go:84] configureAuth start
	I1028 11:12:18.095125  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetMachineName
	I1028 11:12:18.095406  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.098183  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.098549  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.098574  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.098726  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.100797  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.101183  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.101209  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.101422  150723 provision.go:143] copyHostCerts
	I1028 11:12:18.101450  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:12:18.101483  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:12:18.101493  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:12:18.101585  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:12:18.101707  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:12:18.101736  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:12:18.101747  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:12:18.101792  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:12:18.101860  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:12:18.101880  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:12:18.101884  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:12:18.101906  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:12:18.101972  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358-m02 san=[127.0.0.1 192.168.39.15 ha-928358-m02 localhost minikube]
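Note: configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the VM IP, the hostname, localhost and minikube. A compact crypto/x509 sketch of building a certificate with those SANs follows; it self-signs for brevity, whereas the real flow signs with the minikube CA key.

// Sketch: create a server certificate with the SANs listed in the log (self-signed, illustrative).
package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func newServerCert() ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-928358-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config above
		DNSNames:     []string{"ha-928358-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.15")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	return der, key, err
}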
	I1028 11:12:18.196094  150723 provision.go:177] copyRemoteCerts
	I1028 11:12:18.196152  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:12:18.196173  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.198995  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.199315  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.199339  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.199521  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.199709  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.199854  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.199983  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.288841  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:12:18.288936  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:12:18.314840  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:12:18.314910  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:12:18.341393  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:12:18.341485  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:12:18.366854  150723 provision.go:87] duration metric: took 271.722974ms to configureAuth
	I1028 11:12:18.366893  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:12:18.367124  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:18.367212  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.370267  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.370606  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.370639  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.370796  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.371029  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.371173  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.371307  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.371456  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:18.371620  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:18.371634  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:12:18.612895  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:12:18.612923  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:12:18.612931  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetURL
	I1028 11:12:18.614354  150723 main.go:141] libmachine: (ha-928358-m02) DBG | Using libvirt version 6000000
	I1028 11:12:18.616667  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.617056  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.617087  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.617192  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:12:18.617204  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:12:18.617212  150723 client.go:171] duration metric: took 26.86402649s to LocalClient.Create
	I1028 11:12:18.617234  150723 start.go:167] duration metric: took 26.864111247s to libmachine.API.Create "ha-928358"
	I1028 11:12:18.617248  150723 start.go:293] postStartSetup for "ha-928358-m02" (driver="kvm2")
	I1028 11:12:18.617264  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:12:18.617289  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.617583  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:12:18.617614  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.619991  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.620293  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.620324  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.620465  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.620632  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.620807  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.620947  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.709453  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:12:18.714006  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:12:18.714050  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:12:18.714135  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:12:18.714212  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:12:18.714223  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:12:18.714317  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:12:18.725069  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:12:18.750381  150723 start.go:296] duration metric: took 133.112799ms for postStartSetup
	I1028 11:12:18.750443  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetConfigRaw
	I1028 11:12:18.751083  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.753465  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.753830  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.753860  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.754104  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:12:18.754302  150723 start.go:128] duration metric: took 27.019366662s to createHost
	I1028 11:12:18.754324  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.756274  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.756584  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.756606  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.756746  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.756928  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.757083  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.757211  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.757395  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:12:18.757617  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1028 11:12:18.757632  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:12:18.870465  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730113938.848702185
	
	I1028 11:12:18.870492  150723 fix.go:216] guest clock: 1730113938.848702185
	I1028 11:12:18.870502  150723 fix.go:229] Guest: 2024-10-28 11:12:18.848702185 +0000 UTC Remote: 2024-10-28 11:12:18.754313813 +0000 UTC m=+79.331053022 (delta=94.388372ms)
	I1028 11:12:18.870523  150723 fix.go:200] guest clock delta is within tolerance: 94.388372ms
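Note: the fix.go lines above run date +%s.%N on the guest, compare it to the host clock, and accept the machine when the skew is small (~94ms here). A tiny sketch of that tolerance check is below; the one-minute threshold in the comment is an assumption, not taken from the log.

// Sketch: compare guest and host clocks and accept a small skew.
package sketch

import "time"

func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// e.g. the ~94ms delta in the log passes easily against a tolerance of, say, a minute
	return delta <= tolerance
}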
	I1028 11:12:18.870530  150723 start.go:83] releasing machines lock for "ha-928358-m02", held for 27.135687063s
	I1028 11:12:18.870557  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.870818  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:18.873499  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.873921  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.873952  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.876354  150723 out.go:177] * Found network options:
	I1028 11:12:18.877803  150723 out.go:177]   - NO_PROXY=192.168.39.206
	W1028 11:12:18.879297  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:12:18.879332  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.879863  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.880042  150723 main.go:141] libmachine: (ha-928358-m02) Calling .DriverName
	I1028 11:12:18.880145  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:12:18.880199  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	W1028 11:12:18.880223  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:12:18.880307  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:12:18.880332  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHHostname
	I1028 11:12:18.882741  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883009  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.883032  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883152  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883178  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.883365  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.883531  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.883570  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:18.883597  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:18.883673  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
	I1028 11:12:18.883773  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHPort
	I1028 11:12:18.883886  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHKeyPath
	I1028 11:12:18.883979  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetSSHUsername
	I1028 11:12:18.884097  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa Username:docker}
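
Each "new ssh client" entry corresponds to an SSH session opened with the node's IP, port 22, the per-machine private key and the docker user shown in the struct. A minimal sketch of opening such a session with golang.org/x/crypto/ssh; the key path and host are taken from the log, the host-key handling is simplified for throwaway test VMs, and this is not minikube's sshutil code:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Placeholder key path mirroring the SSHKeyPath in the log.
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for ephemeral test VMs only
        }
        client, err := ssh.Dial("tcp", "192.168.39.15:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("date +%s.%N")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
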
	I1028 11:12:19.140607  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:12:19.146803  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:12:19.146880  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:12:19.163725  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:12:19.163760  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:12:19.163823  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:12:19.180717  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:12:19.195299  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:12:19.195367  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:12:19.209555  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:12:19.223597  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:12:19.345039  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:12:19.505186  150723 docker.go:233] disabling docker service ...
	I1028 11:12:19.505264  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:12:19.520570  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:12:19.534795  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:12:19.656005  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:12:19.777835  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:12:19.793076  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:12:19.813202  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:12:19.813275  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.824795  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:12:19.824878  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.836376  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.847788  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.858444  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:12:19.869710  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.880881  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.900116  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:12:19.910944  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:12:19.921199  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:12:19.921284  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:12:19.936681  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:12:19.954317  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:20.080754  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
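
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged port sysctl) and then restart CRI-O. A rough sketch that replays the core of that sequence from Go; it runs the commands locally through sh for simplicity, whereas in the log they go through the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configureCRIO applies the same style of in-place edits seen in the log:
    // set the pause image, switch the cgroup manager to cgroupfs, pin the
    // conmon cgroup, then restart the runtime.
    func configureCRIO() error {
        cmds := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo systemctl restart crio`,
        }
        for _, c := range cmds {
            if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
                return fmt.Errorf("%q failed: %v: %s", c, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := configureCRIO(); err != nil {
            fmt.Println(err)
        }
    }
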
	I1028 11:12:20.180414  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:12:20.180503  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:12:20.185906  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:12:20.185979  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:12:20.190133  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:12:20.233553  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:12:20.233626  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:12:20.262764  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:12:20.298972  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:12:20.300478  150723 out.go:177]   - env NO_PROXY=192.168.39.206
	I1028 11:12:20.301810  150723 main.go:141] libmachine: (ha-928358-m02) Calling .GetIP
	I1028 11:12:20.304361  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:20.304709  150723 main.go:141] libmachine: (ha-928358-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:70:28", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:12:07 +0000 UTC Type:0 Mac:52:54:00:6f:70:28 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-928358-m02 Clientid:01:52:54:00:6f:70:28}
	I1028 11:12:20.304731  150723 main.go:141] libmachine: (ha-928358-m02) DBG | domain ha-928358-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:6f:70:28 in network mk-ha-928358
	I1028 11:12:20.304901  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:12:20.309556  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:12:20.323672  150723 mustload.go:65] Loading cluster: ha-928358
	I1028 11:12:20.323882  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:20.324235  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:20.324287  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:20.339013  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I1028 11:12:20.339463  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:20.340030  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:20.340052  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:20.340399  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:20.340615  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:12:20.342314  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:12:20.342631  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:20.342680  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:20.357539  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I1028 11:12:20.358002  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:20.358498  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:20.358519  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:20.359008  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:20.359212  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:12:20.359422  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.15
	I1028 11:12:20.359434  150723 certs.go:194] generating shared ca certs ...
	I1028 11:12:20.359450  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.359573  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:12:20.359614  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:12:20.359623  150723 certs.go:256] generating profile certs ...
	I1028 11:12:20.359689  150723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:12:20.359712  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94
	I1028 11:12:20.359727  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.254]
	I1028 11:12:20.442903  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 ...
	I1028 11:12:20.442934  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94: {Name:mk85a4e1a50b9026ab3d6dc4495b321bb7e02ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.443115  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94 ...
	I1028 11:12:20.443128  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94: {Name:mk7f773e25633de1a7b22c2c20b13ade22c5f211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:12:20.443202  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.1139bf94 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:12:20.443334  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.1139bf94 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:12:20.443463  150723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
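
The apiserver profile certificate is regenerated above because its SAN list must now cover the new control plane (192.168.39.15) and the HA VIP (192.168.39.254) alongside the service and loopback IPs. A minimal, self-contained sketch of issuing a certificate with that IP SAN set; the throwaway CA, key sizes and validity period are assumptions for illustration, not minikube's certificate code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Apiserver-style leaf cert with the IP SANs listed in the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.206"), net.ParseIP("192.168.39.15"), net.ParseIP("192.168.39.254"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
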
	I1028 11:12:20.443480  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:12:20.443493  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:12:20.443506  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:12:20.443519  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:12:20.443535  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:12:20.443547  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:12:20.443559  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:12:20.443571  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:12:20.443620  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:12:20.443647  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:12:20.443657  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:12:20.443683  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:12:20.443705  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:12:20.443728  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:12:20.443767  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:12:20.443793  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:20.443806  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:12:20.443820  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:12:20.443852  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:12:20.446971  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:20.447376  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:12:20.447407  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:20.447537  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:12:20.447754  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:12:20.447909  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:12:20.448040  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:12:20.533935  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:12:20.540194  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:12:20.553555  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:12:20.558471  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:12:20.571472  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:12:20.576267  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:12:20.588003  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:12:20.593338  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:12:20.605038  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:12:20.609724  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:12:20.623742  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:12:20.628679  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:12:20.640341  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:12:20.667017  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:12:20.692744  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:12:20.718588  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:12:20.748034  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 11:12:20.775373  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:12:20.802947  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:12:20.831097  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:12:20.858123  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:12:20.882703  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:12:20.907628  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:12:20.933325  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:12:20.951380  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:12:20.970398  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:12:20.988118  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:12:21.006403  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:12:21.027746  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:12:21.046174  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:12:21.066465  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:12:21.072838  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:12:21.086541  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.091618  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.091672  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:12:21.098303  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:12:21.110328  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:12:21.122629  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.127701  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.127772  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:12:21.134271  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:12:21.146879  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:12:21.159782  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.165113  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.165173  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:12:21.171693  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
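
The openssl/ln pairs above install each CA bundle under /usr/share/ca-certificates and create the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL's certificate lookup expects; `openssl x509 -hash -noout` prints that hash. A small sketch of the same step driven from Go, assuming openssl is on PATH; the paths are placeholders taken from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of certPath and creates the
    // corresponding <hash>.0 symlink in /etc/ssl/certs, mirroring the
    // "openssl x509 -hash -noout" + "ln -fs" pair in the log.
    func linkCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
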
	I1028 11:12:21.183939  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:12:21.188218  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:12:21.188285  150723 kubeadm.go:934] updating node {m02 192.168.39.15 8443 v1.31.2 crio true true} ...
	I1028 11:12:21.188380  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:12:21.188402  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:12:21.188440  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:12:21.207772  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:12:21.207836  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
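
The generated manifest above is later written to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs kube-vip as a static pod that announces the 192.168.39.254 VIP and load-balances port 8443 across control planes. A trimmed, purely illustrative sketch of rendering such a manifest from a Go text/template; only the fields that vary per cluster are parameterized, and this is not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down stand-in for the kube-vip manifest: just the per-cluster
    // values (interface, port, VIP address) are templated.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.4
        args: ["manager"]
        env:
        - {name: vip_interface, value: {{.Interface}}}
        - {name: port, value: "{{.Port}}"}
        - {name: address, value: {{.VIP}}}
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
        _ = t.Execute(os.Stdout, struct {
            Interface, VIP string
            Port           int
        }{Interface: "eth0", VIP: "192.168.39.254", Port: 8443})
    }
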
	I1028 11:12:21.207903  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:12:21.219161  150723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:12:21.219233  150723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:12:21.229788  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:12:21.229822  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:12:21.229868  150723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1028 11:12:21.229883  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:12:21.229901  150723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1028 11:12:21.234643  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:12:21.234682  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:12:22.169217  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:12:22.169290  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:12:22.175155  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:12:22.175187  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:12:22.612156  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:12:22.630404  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:12:22.630517  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:12:22.635637  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:12:22.635690  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
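
The kubectl, kubeadm and kubelet binaries are fetched from dl.k8s.io with a `?checksum=file:...sha256` query, cached under .minikube/cache and scp'd to the node only when the remote stat fails. A minimal sketch of downloading one binary and verifying it against its published sha256; this illustrates the checksum idea, not minikube's download package:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256")
        if err != nil {
            panic(err)
        }
        want := strings.Fields(string(sum))[0]
        h := sha256.Sum256(bin)
        got := hex.EncodeToString(h[:])
        if got != want {
            panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
        }
        if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
            panic(err)
        }
        fmt.Println("kubectl verified and saved")
    }
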
	I1028 11:12:22.984793  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:12:22.995829  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:12:23.014631  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:12:23.033132  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:12:23.051694  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:12:23.056057  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:12:23.069704  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:23.193632  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:12:23.213616  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:12:23.214094  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:12:23.214154  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:12:23.229467  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39255
	I1028 11:12:23.229946  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:12:23.230470  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:12:23.230493  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:12:23.230811  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:12:23.231005  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:12:23.231156  150723 start.go:317] joinCluster: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:12:23.231250  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:12:23.231265  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:12:23.234605  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:23.235105  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:12:23.235130  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:12:23.235484  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:12:23.235658  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:12:23.235817  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:12:23.235978  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:12:23.587402  150723 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:12:23.587450  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0up603.shgmvlsrpj1mebjg --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m02 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443"
	I1028 11:12:49.062311  150723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0up603.shgmvlsrpj1mebjg --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m02 --control-plane --apiserver-advertise-address=192.168.39.15 --apiserver-bind-port=8443": (25.474831461s)
	I1028 11:12:49.062358  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:12:49.750628  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358-m02 minikube.k8s.io/updated_at=2024_10_28T11_12_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=false
	I1028 11:12:49.901989  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-928358-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:12:50.021163  150723 start.go:319] duration metric: took 26.789999674s to joinCluster
	I1028 11:12:50.021261  150723 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:12:50.021588  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:12:50.022686  150723 out.go:177] * Verifying Kubernetes components...
	I1028 11:12:50.024027  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:12:50.259666  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:12:50.294975  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:12:50.295261  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:12:50.295325  150723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.206:8443
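
The warning above swaps the kubeconfig's VIP endpoint (https://192.168.39.254:8443) for the first control plane's direct address while the VIP is still converging. A minimal client-go sketch of loading a kubeconfig and overriding the host in the same spirit; the paths and addresses come from the log, and the override is an illustration of the intent rather than minikube's exact code:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19876-132631/kubeconfig")
        if err != nil {
            panic(err)
        }
        // The kubeconfig points at the HA VIP; talk to one apiserver directly
        // instead while the VIP is still being announced.
        cfg.Host = "https://192.168.39.206:8443"

        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ver, err := client.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("talking to", cfg.Host, "version", ver.GitVersion)
    }
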
	I1028 11:12:50.295539  150723 node_ready.go:35] waiting up to 6m0s for node "ha-928358-m02" to be "Ready" ...
	I1028 11:12:50.295634  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:50.295644  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:50.295655  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:50.295661  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:50.311123  150723 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1028 11:12:50.796718  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:50.796750  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:50.796761  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:50.796767  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:50.800704  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:51.296741  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:51.296771  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:51.296783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:51.296789  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:51.301317  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:51.796429  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:51.796461  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:51.796472  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:51.796479  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:51.902786  150723 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I1028 11:12:52.295866  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:52.295889  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:52.295896  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:52.295902  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:52.299707  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:52.300296  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:52.796802  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:52.796836  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:52.796848  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:52.796854  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:52.801105  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:53.296430  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:53.296464  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:53.296476  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:53.296482  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:53.300401  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:53.796454  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:53.796475  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:53.796483  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:53.796487  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:53.800686  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:54.296632  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:54.296658  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:54.296669  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:54.296675  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:54.430413  150723 round_trippers.go:574] Response Status: 200 OK in 133 milliseconds
	I1028 11:12:54.431260  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:54.796228  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:54.796251  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:54.796260  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:54.796297  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:54.799743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:55.295741  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:55.295769  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:55.295779  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:55.295784  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:55.300264  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:55.796141  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:55.796166  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:55.796177  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:55.796183  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:55.799984  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:56.296002  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:56.296025  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:56.296033  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:56.296038  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:56.299236  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:56.796285  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:56.796327  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:56.796338  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:56.796343  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:56.801079  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:56.801722  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:57.295973  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:57.296010  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:57.296019  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:57.296022  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:57.300070  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:57.796110  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:57.796138  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:57.796150  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:57.796156  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:57.800286  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:12:58.296657  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:58.296684  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:58.296694  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:58.296700  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:58.300601  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:58.795760  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:58.795783  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:58.795791  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:58.795795  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:58.799253  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:59.296427  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:59.296448  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:59.296457  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:59.296461  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:59.300112  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:12:59.300577  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:12:59.795852  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:12:59.795874  150723 round_trippers.go:469] Request Headers:
	I1028 11:12:59.795882  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:12:59.795886  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:12:59.799187  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:00.296355  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:00.296376  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:00.296385  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:00.296388  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:00.300090  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:00.796212  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:00.796241  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:00.796250  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:00.796255  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:00.799643  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:01.296675  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:01.296698  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:01.296706  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:01.296720  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:01.300506  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:01.300981  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:01.795747  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:01.795781  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:01.795793  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:01.795800  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:01.799384  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:02.296561  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:02.296587  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:02.296595  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:02.296601  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:02.300227  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:02.796111  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:02.796139  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:02.796150  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:02.796175  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:02.799502  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:03.295908  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:03.295932  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:03.295940  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:03.295944  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:03.299608  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:03.796579  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:03.796602  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:03.796611  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:03.796615  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:03.801307  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:03.802803  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:04.296022  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:04.296047  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:04.296055  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:04.296058  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:04.300556  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:04.796471  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:04.796494  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:04.796502  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:04.796507  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:04.801460  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:05.296387  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:05.296409  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:05.296417  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:05.296422  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:05.299743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:05.796148  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:05.796171  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:05.796179  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:05.796184  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:05.801488  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:06.296441  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:06.296475  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:06.296487  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:06.296492  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:06.300636  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:06.301140  150723 node_ready.go:53] node "ha-928358-m02" has status "Ready":"False"
	I1028 11:13:06.796015  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:06.796054  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:06.796067  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:06.796073  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:06.802178  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:07.295805  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:07.295832  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:07.295841  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:07.295845  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:07.300831  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:07.796368  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:07.796395  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:07.796407  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:07.796413  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:07.800287  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.295819  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:08.295846  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.295856  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.295862  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.303573  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:13:08.304813  150723 node_ready.go:49] node "ha-928358-m02" has status "Ready":"True"
	I1028 11:13:08.304842  150723 node_ready.go:38] duration metric: took 18.009284836s for node "ha-928358-m02" to be "Ready" ...
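
	The loop above fetches the node object for ha-928358-m02 roughly every 500ms until its Ready condition reports True (about 18s here). As an illustrative sketch only, not minikube's actual node_ready helper, an equivalent wait with client-go could look like the following; the kubeconfig path is a placeholder.

	// illustrative_node_wait.go: poll a node until its Ready condition is True.
	// Assumes a reachable cluster and a kubeconfig at the placeholder path below.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		const nodeName = "ha-928358-m02"
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Printf("node %q is Ready\n", nodeName)
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				panic(fmt.Sprintf("timed out waiting for node %q", nodeName))
			case <-time.After(500 * time.Millisecond): // same ~500ms cadence as the log above
			}
		}
	}
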
	I1028 11:13:08.304855  150723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:13:08.304964  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:08.304977  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.304986  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.304996  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.314253  150723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1028 11:13:08.322556  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.322661  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gnm9r
	I1028 11:13:08.322674  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.322686  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.322694  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.325598  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.326235  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.326251  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.326262  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.326267  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.329653  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.330306  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.330330  150723 pod_ready.go:82] duration metric: took 7.745243ms for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.330344  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.330420  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xxxgw
	I1028 11:13:08.330431  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.330443  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.330451  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.333854  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.334683  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.334698  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.334709  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.334717  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.338575  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.339125  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.339151  150723 pod_ready.go:82] duration metric: took 8.79493ms for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.339166  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.339239  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358
	I1028 11:13:08.339251  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.339260  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.339266  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.342147  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.342887  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.342903  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.342914  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.342919  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.345586  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:13:08.346017  150723 pod_ready.go:93] pod "etcd-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.346037  150723 pod_ready.go:82] duration metric: took 6.859007ms for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.346049  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.346126  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m02
	I1028 11:13:08.346136  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.346149  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.346155  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.349837  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.350760  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:08.350776  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.350783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.350787  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.354111  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.354776  150723 pod_ready.go:93] pod "etcd-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.354797  150723 pod_ready.go:82] duration metric: took 8.74104ms for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.354818  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.496252  150723 request.go:632] Waited for 141.345028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:13:08.496314  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:13:08.496320  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.496333  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.496338  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.500168  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.696151  150723 request.go:632] Waited for 195.353851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.696219  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:08.696228  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.696240  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.696249  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.700151  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:08.701139  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:08.701160  150723 pod_ready.go:82] duration metric: took 346.331354ms for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.701174  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:08.896292  150723 request.go:632] Waited for 195.012978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:13:08.896361  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:13:08.896371  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:08.896387  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:08.896396  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:08.900050  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.096401  150723 request.go:632] Waited for 195.396634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.096476  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.096481  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.096489  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.096493  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.100986  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:09.101422  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.101442  150723 pod_ready.go:82] duration metric: took 400.258829ms for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.101456  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.296560  150723 request.go:632] Waited for 195.02851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:13:09.296638  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:13:09.296643  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.296654  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.296672  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.300596  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.496746  150723 request.go:632] Waited for 195.271102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:09.496832  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:09.496844  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.496856  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.496863  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.500375  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.501182  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.501208  150723 pod_ready.go:82] duration metric: took 399.742852ms for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.501223  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.696672  150723 request.go:632] Waited for 195.364831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:13:09.696747  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:13:09.696753  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.696761  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.696765  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.700353  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.896500  150723 request.go:632] Waited for 195.402622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.896557  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:09.896562  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:09.896570  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:09.896574  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:09.899876  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:09.900586  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:09.900606  150723 pod_ready.go:82] duration metric: took 399.370555ms for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:09.900621  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.096828  150723 request.go:632] Waited for 196.099526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:13:10.096889  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:13:10.096895  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.096902  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.096907  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.100607  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.295935  150723 request.go:632] Waited for 194.296247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:10.296028  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:10.296036  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.296047  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.296052  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.299514  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.299992  150723 pod_ready.go:93] pod "kube-proxy-8fxdn" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:10.300013  150723 pod_ready.go:82] duration metric: took 399.384578ms for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.300033  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.496260  150723 request.go:632] Waited for 196.135494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:13:10.496330  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:13:10.496339  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.496347  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.496352  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.500702  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:10.696747  150723 request.go:632] Waited for 195.398969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:10.696828  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:10.696834  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.696842  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.696849  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.700510  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:10.701486  150723 pod_ready.go:93] pod "kube-proxy-cfhp5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:10.701505  150723 pod_ready.go:82] duration metric: took 401.465094ms for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.701515  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:10.896720  150723 request.go:632] Waited for 195.109133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:13:10.896777  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:13:10.896783  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:10.896790  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:10.896795  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:10.900315  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.096400  150723 request.go:632] Waited for 195.36981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:11.096478  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:13:11.096483  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.096493  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.096499  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.100065  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.100566  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:11.100590  150723 pod_ready.go:82] duration metric: took 399.065558ms for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.100600  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.296785  150723 request.go:632] Waited for 196.108788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:13:11.296873  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:13:11.296881  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.296891  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.296896  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.300760  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:13:11.495907  150723 request.go:632] Waited for 194.292764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:11.495994  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:13:11.496001  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.496011  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.496021  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.500420  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:11.500960  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:13:11.500979  150723 pod_ready.go:82] duration metric: took 400.371324ms for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:13:11.500991  150723 pod_ready.go:39] duration metric: took 3.196117998s for the extra wait for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:13:11.501012  150723 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:13:11.501071  150723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:13:11.518775  150723 api_server.go:72] duration metric: took 21.497464525s to wait for apiserver process to appear ...
	I1028 11:13:11.518811  150723 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:13:11.518839  150723 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1028 11:13:11.523103  150723 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1028 11:13:11.523168  150723 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1028 11:13:11.523173  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.523180  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.523189  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.524064  150723 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1028 11:13:11.524163  150723 api_server.go:141] control plane version: v1.31.2
	I1028 11:13:11.524189  150723 api_server.go:131] duration metric: took 5.370992ms to wait for apiserver health ...
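
	The two requests above form the apiserver health gate: a GET to /healthz that must return 200 with body "ok", followed by GET /version to read the control-plane version (v1.31.2 here). Both paths are normally reachable without client certificates via the default system:public-info-viewer binding, though that depends on the cluster's anonymous-auth settings. A minimal sketch of the same probe, skipping certificate verification purely for illustration:

	// illustrative_healthz_probe.go: probe an apiserver's /healthz and /version endpoints.
	// TLS verification is skipped here only for brevity; real code should trust the cluster CA.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		base := "https://192.168.39.206:8443" // apiserver address from the log

		for _, path := range []string{"/healthz", "/version"} {
			resp, err := client.Get(base + path)
			if err != nil {
				panic(err)
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("GET %s -> %d: %s\n", path, resp.StatusCode, body)
		}
	}
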
	I1028 11:13:11.524197  150723 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:13:11.696656  150723 request.go:632] Waited for 172.384226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:11.696727  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:11.696733  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.696740  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.696744  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.702489  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:13:11.707749  150723 system_pods.go:59] 17 kube-system pods found
	I1028 11:13:11.707791  150723 system_pods.go:61] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:13:11.707798  150723 system_pods.go:61] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:13:11.707802  150723 system_pods.go:61] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:13:11.707805  150723 system_pods.go:61] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:13:11.707808  150723 system_pods.go:61] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:13:11.707812  150723 system_pods.go:61] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:13:11.707815  150723 system_pods.go:61] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:13:11.707818  150723 system_pods.go:61] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:13:11.707821  150723 system_pods.go:61] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:13:11.707824  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:13:11.707828  150723 system_pods.go:61] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:13:11.707831  150723 system_pods.go:61] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:13:11.707833  150723 system_pods.go:61] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:13:11.707837  150723 system_pods.go:61] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:13:11.707840  150723 system_pods.go:61] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:13:11.707843  150723 system_pods.go:61] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:13:11.707847  150723 system_pods.go:61] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:13:11.707852  150723 system_pods.go:74] duration metric: took 183.650264ms to wait for pod list to return data ...
	I1028 11:13:11.707863  150723 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:13:11.895935  150723 request.go:632] Waited for 187.997842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:13:11.895992  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:13:11.895997  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:11.896004  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:11.896009  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:11.900031  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:11.900269  150723 default_sa.go:45] found service account: "default"
	I1028 11:13:11.900286  150723 default_sa.go:55] duration metric: took 192.416558ms for default service account to be created ...
	I1028 11:13:11.900298  150723 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:13:12.096570  150723 request.go:632] Waited for 196.184771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:12.096668  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:13:12.096678  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:12.096690  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:12.096703  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:12.102990  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:13:12.107971  150723 system_pods.go:86] 17 kube-system pods found
	I1028 11:13:12.108008  150723 system_pods.go:89] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:13:12.108017  150723 system_pods.go:89] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:13:12.108022  150723 system_pods.go:89] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:13:12.108027  150723 system_pods.go:89] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:13:12.108032  150723 system_pods.go:89] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:13:12.108037  150723 system_pods.go:89] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:13:12.108044  150723 system_pods.go:89] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:13:12.108051  150723 system_pods.go:89] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:13:12.108056  150723 system_pods.go:89] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:13:12.108062  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:13:12.108067  150723 system_pods.go:89] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:13:12.108072  150723 system_pods.go:89] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:13:12.108076  150723 system_pods.go:89] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:13:12.108082  150723 system_pods.go:89] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:13:12.108088  150723 system_pods.go:89] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:13:12.108094  150723 system_pods.go:89] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:13:12.108101  150723 system_pods.go:89] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:13:12.108116  150723 system_pods.go:126] duration metric: took 207.810112ms to wait for k8s-apps to be running ...
	I1028 11:13:12.108138  150723 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:13:12.108196  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:13:12.125765  150723 system_svc.go:56] duration metric: took 17.59726ms WaitForService to wait for kubelet
	I1028 11:13:12.125805  150723 kubeadm.go:582] duration metric: took 22.104503497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:13:12.125835  150723 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:13:12.296271  150723 request.go:632] Waited for 170.346607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1028 11:13:12.296352  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1028 11:13:12.296358  150723 round_trippers.go:469] Request Headers:
	I1028 11:13:12.296365  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:13:12.296370  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:13:12.301322  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:13:12.302235  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:13:12.302261  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:13:12.302297  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:13:12.302303  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:13:12.302310  150723 node_conditions.go:105] duration metric: took 176.469824ms to run NodePressure ...
	I1028 11:13:12.302331  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:13:12.302371  150723 start.go:255] writing updated cluster config ...
	I1028 11:13:12.304722  150723 out.go:201] 
	I1028 11:13:12.306493  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:12.306595  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:12.308496  150723 out.go:177] * Starting "ha-928358-m03" control-plane node in "ha-928358" cluster
	I1028 11:13:12.310210  150723 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:13:12.310234  150723 cache.go:56] Caching tarball of preloaded images
	I1028 11:13:12.310336  150723 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:13:12.310347  150723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:13:12.310430  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:12.310601  150723 start.go:360] acquireMachinesLock for ha-928358-m03: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:13:12.310642  150723 start.go:364] duration metric: took 22.061µs to acquireMachinesLock for "ha-928358-m03"
	I1028 11:13:12.310662  150723 start.go:93] Provisioning new machine with config: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:13:12.310748  150723 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1028 11:13:12.312443  150723 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 11:13:12.312555  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:12.312596  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:12.327768  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I1028 11:13:12.328249  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:12.328745  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:12.328765  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:12.329102  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:12.329311  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:12.329448  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:12.329611  150723 start.go:159] libmachine.API.Create for "ha-928358" (driver="kvm2")
	I1028 11:13:12.329642  150723 client.go:168] LocalClient.Create starting
	I1028 11:13:12.329670  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 11:13:12.329703  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:13:12.329720  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:13:12.329768  150723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 11:13:12.329788  150723 main.go:141] libmachine: Decoding PEM data...
	I1028 11:13:12.329799  150723 main.go:141] libmachine: Parsing certificate...
	I1028 11:13:12.329815  150723 main.go:141] libmachine: Running pre-create checks...
	I1028 11:13:12.329826  150723 main.go:141] libmachine: (ha-928358-m03) Calling .PreCreateCheck
	I1028 11:13:12.329995  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:12.330372  150723 main.go:141] libmachine: Creating machine...
	I1028 11:13:12.330386  150723 main.go:141] libmachine: (ha-928358-m03) Calling .Create
	I1028 11:13:12.330528  150723 main.go:141] libmachine: (ha-928358-m03) Creating KVM machine...
	I1028 11:13:12.331834  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found existing default KVM network
	I1028 11:13:12.332000  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found existing private KVM network mk-ha-928358
	I1028 11:13:12.332124  150723 main.go:141] libmachine: (ha-928358-m03) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 ...
	I1028 11:13:12.332140  150723 main.go:141] libmachine: (ha-928358-m03) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 11:13:12.332221  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.332127  151534 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:13:12.332333  150723 main.go:141] libmachine: (ha-928358-m03) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 11:13:12.597391  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.597227  151534 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa...
	I1028 11:13:12.699922  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.699777  151534 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/ha-928358-m03.rawdisk...
	I1028 11:13:12.699960  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Writing magic tar header
	I1028 11:13:12.699975  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Writing SSH key tar header
	I1028 11:13:12.699986  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:12.699933  151534 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 ...
	I1028 11:13:12.700170  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03 (perms=drwx------)
	I1028 11:13:12.700205  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 11:13:12.700218  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03
	I1028 11:13:12.700232  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 11:13:12.700244  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 11:13:12.700258  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 11:13:12.700271  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:13:12.700287  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 11:13:12.700300  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 11:13:12.700313  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 11:13:12.700325  150723 main.go:141] libmachine: (ha-928358-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 11:13:12.700339  150723 main.go:141] libmachine: (ha-928358-m03) Creating domain...
	I1028 11:13:12.700363  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home/jenkins
	I1028 11:13:12.700371  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Checking permissions on dir: /home
	I1028 11:13:12.700395  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Skipping /home - not owner
	I1028 11:13:12.701297  150723 main.go:141] libmachine: (ha-928358-m03) define libvirt domain using xml: 
	I1028 11:13:12.701328  150723 main.go:141] libmachine: (ha-928358-m03) <domain type='kvm'>
	I1028 11:13:12.701339  150723 main.go:141] libmachine: (ha-928358-m03)   <name>ha-928358-m03</name>
	I1028 11:13:12.701346  150723 main.go:141] libmachine: (ha-928358-m03)   <memory unit='MiB'>2200</memory>
	I1028 11:13:12.701358  150723 main.go:141] libmachine: (ha-928358-m03)   <vcpu>2</vcpu>
	I1028 11:13:12.701364  150723 main.go:141] libmachine: (ha-928358-m03)   <features>
	I1028 11:13:12.701373  150723 main.go:141] libmachine: (ha-928358-m03)     <acpi/>
	I1028 11:13:12.701383  150723 main.go:141] libmachine: (ha-928358-m03)     <apic/>
	I1028 11:13:12.701391  150723 main.go:141] libmachine: (ha-928358-m03)     <pae/>
	I1028 11:13:12.701404  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701415  150723 main.go:141] libmachine: (ha-928358-m03)   </features>
	I1028 11:13:12.701423  150723 main.go:141] libmachine: (ha-928358-m03)   <cpu mode='host-passthrough'>
	I1028 11:13:12.701433  150723 main.go:141] libmachine: (ha-928358-m03)   
	I1028 11:13:12.701445  150723 main.go:141] libmachine: (ha-928358-m03)   </cpu>
	I1028 11:13:12.701456  150723 main.go:141] libmachine: (ha-928358-m03)   <os>
	I1028 11:13:12.701463  150723 main.go:141] libmachine: (ha-928358-m03)     <type>hvm</type>
	I1028 11:13:12.701472  150723 main.go:141] libmachine: (ha-928358-m03)     <boot dev='cdrom'/>
	I1028 11:13:12.701478  150723 main.go:141] libmachine: (ha-928358-m03)     <boot dev='hd'/>
	I1028 11:13:12.701513  150723 main.go:141] libmachine: (ha-928358-m03)     <bootmenu enable='no'/>
	I1028 11:13:12.701555  150723 main.go:141] libmachine: (ha-928358-m03)   </os>
	I1028 11:13:12.701565  150723 main.go:141] libmachine: (ha-928358-m03)   <devices>
	I1028 11:13:12.701573  150723 main.go:141] libmachine: (ha-928358-m03)     <disk type='file' device='cdrom'>
	I1028 11:13:12.701585  150723 main.go:141] libmachine: (ha-928358-m03)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/boot2docker.iso'/>
	I1028 11:13:12.701593  150723 main.go:141] libmachine: (ha-928358-m03)       <target dev='hdc' bus='scsi'/>
	I1028 11:13:12.701600  150723 main.go:141] libmachine: (ha-928358-m03)       <readonly/>
	I1028 11:13:12.701607  150723 main.go:141] libmachine: (ha-928358-m03)     </disk>
	I1028 11:13:12.701622  150723 main.go:141] libmachine: (ha-928358-m03)     <disk type='file' device='disk'>
	I1028 11:13:12.701635  150723 main.go:141] libmachine: (ha-928358-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 11:13:12.701651  150723 main.go:141] libmachine: (ha-928358-m03)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/ha-928358-m03.rawdisk'/>
	I1028 11:13:12.701662  150723 main.go:141] libmachine: (ha-928358-m03)       <target dev='hda' bus='virtio'/>
	I1028 11:13:12.701673  150723 main.go:141] libmachine: (ha-928358-m03)     </disk>
	I1028 11:13:12.701683  150723 main.go:141] libmachine: (ha-928358-m03)     <interface type='network'>
	I1028 11:13:12.701717  150723 main.go:141] libmachine: (ha-928358-m03)       <source network='mk-ha-928358'/>
	I1028 11:13:12.701741  150723 main.go:141] libmachine: (ha-928358-m03)       <model type='virtio'/>
	I1028 11:13:12.701754  150723 main.go:141] libmachine: (ha-928358-m03)     </interface>
	I1028 11:13:12.701765  150723 main.go:141] libmachine: (ha-928358-m03)     <interface type='network'>
	I1028 11:13:12.701776  150723 main.go:141] libmachine: (ha-928358-m03)       <source network='default'/>
	I1028 11:13:12.701787  150723 main.go:141] libmachine: (ha-928358-m03)       <model type='virtio'/>
	I1028 11:13:12.701800  150723 main.go:141] libmachine: (ha-928358-m03)     </interface>
	I1028 11:13:12.701809  150723 main.go:141] libmachine: (ha-928358-m03)     <serial type='pty'>
	I1028 11:13:12.701821  150723 main.go:141] libmachine: (ha-928358-m03)       <target port='0'/>
	I1028 11:13:12.701833  150723 main.go:141] libmachine: (ha-928358-m03)     </serial>
	I1028 11:13:12.701844  150723 main.go:141] libmachine: (ha-928358-m03)     <console type='pty'>
	I1028 11:13:12.701855  150723 main.go:141] libmachine: (ha-928358-m03)       <target type='serial' port='0'/>
	I1028 11:13:12.701866  150723 main.go:141] libmachine: (ha-928358-m03)     </console>
	I1028 11:13:12.701874  150723 main.go:141] libmachine: (ha-928358-m03)     <rng model='virtio'>
	I1028 11:13:12.701883  150723 main.go:141] libmachine: (ha-928358-m03)       <backend model='random'>/dev/random</backend>
	I1028 11:13:12.701898  150723 main.go:141] libmachine: (ha-928358-m03)     </rng>
	I1028 11:13:12.701909  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701917  150723 main.go:141] libmachine: (ha-928358-m03)     
	I1028 11:13:12.701927  150723 main.go:141] libmachine: (ha-928358-m03)   </devices>
	I1028 11:13:12.701935  150723 main.go:141] libmachine: (ha-928358-m03) </domain>
	I1028 11:13:12.701944  150723 main.go:141] libmachine: (ha-928358-m03) 
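
	The XML dumped above is the libvirt domain definition for the new m03 node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image as the system disk, two virtio NICs (the private mk-ha-928358 network plus the default network), a pty serial console, and a virtio RNG. minikube's kvm2 driver defines and boots this domain through the libvirt API; purely as a rough, hand-driven illustration (not the driver's code), the same steps via virsh could be scripted as below, with "domain.xml" standing in for the XML printed above. The final command is the manual analogue of the "Waiting to get IP" retry loop that follows in the log.

	// illustrative_define_domain.go: define and start a libvirt domain from an XML file.
	// Requires virsh on the host; "domain.xml" and the domain name are taken from the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v\n%s", args, out)
		if err != nil {
			panic(err)
		}
	}

	func main() {
		run("-c", "qemu:///system", "define", "domain.xml")        // register the domain with libvirt
		run("-c", "qemu:///system", "start", "ha-928358-m03")      // boot it; the network hands out a DHCP lease
		run("-c", "qemu:///system", "domifaddr", "ha-928358-m03")  // poll this until an IP address appears
	}
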
	I1028 11:13:12.709093  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:b5:fb:00 in network default
	I1028 11:13:12.709827  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring networks are active...
	I1028 11:13:12.709849  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:12.710555  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring network default is active
	I1028 11:13:12.710786  150723 main.go:141] libmachine: (ha-928358-m03) Ensuring network mk-ha-928358 is active
	I1028 11:13:12.711115  150723 main.go:141] libmachine: (ha-928358-m03) Getting domain xml...
	I1028 11:13:12.711807  150723 main.go:141] libmachine: (ha-928358-m03) Creating domain...
	I1028 11:13:13.995752  150723 main.go:141] libmachine: (ha-928358-m03) Waiting to get IP...
	I1028 11:13:13.996563  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:13.997045  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:13.997085  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:13.997018  151534 retry.go:31] will retry after 234.151571ms: waiting for machine to come up
	I1028 11:13:14.232519  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.233064  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.233096  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.232999  151534 retry.go:31] will retry after 249.582339ms: waiting for machine to come up
	I1028 11:13:14.484383  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.484878  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.484915  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.484812  151534 retry.go:31] will retry after 409.553215ms: waiting for machine to come up
	I1028 11:13:14.896380  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:14.896855  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:14.896887  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:14.896797  151534 retry.go:31] will retry after 412.085621ms: waiting for machine to come up
	I1028 11:13:15.310086  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:15.310769  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:15.310799  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:15.310719  151534 retry.go:31] will retry after 651.315136ms: waiting for machine to come up
	I1028 11:13:15.963589  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:15.964049  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:15.964078  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:15.963990  151534 retry.go:31] will retry after 936.522294ms: waiting for machine to come up
	I1028 11:13:16.902173  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:16.902668  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:16.902689  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:16.902618  151534 retry.go:31] will retry after 774.455135ms: waiting for machine to come up
	I1028 11:13:17.679023  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:17.679574  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:17.679600  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:17.679540  151534 retry.go:31] will retry after 1.069131352s: waiting for machine to come up
	I1028 11:13:18.750780  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:18.751352  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:18.751375  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:18.751284  151534 retry.go:31] will retry after 1.587573663s: waiting for machine to come up
	I1028 11:13:20.340206  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:20.340612  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:20.340643  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:20.340566  151534 retry.go:31] will retry after 1.424108777s: waiting for machine to come up
	I1028 11:13:21.766872  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:21.767376  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:21.767397  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:21.767337  151534 retry.go:31] will retry after 1.867673803s: waiting for machine to come up
	I1028 11:13:23.637608  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:23.638075  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:23.638103  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:23.638049  151534 retry.go:31] will retry after 3.385284423s: waiting for machine to come up
	I1028 11:13:27.027812  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:27.028397  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:27.028423  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:27.028342  151534 retry.go:31] will retry after 4.143137357s: waiting for machine to come up
	I1028 11:13:31.174612  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:31.174990  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find current IP address of domain ha-928358-m03 in network mk-ha-928358
	I1028 11:13:31.175020  150723 main.go:141] libmachine: (ha-928358-m03) DBG | I1028 11:13:31.174951  151534 retry.go:31] will retry after 3.870983412s: waiting for machine to come up
	I1028 11:13:35.049044  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.049668  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has current primary IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.049716  150723 main.go:141] libmachine: (ha-928358-m03) Found IP for machine: 192.168.39.44
	I1028 11:13:35.049734  150723 main.go:141] libmachine: (ha-928358-m03) Reserving static IP address...
	I1028 11:13:35.050296  150723 main.go:141] libmachine: (ha-928358-m03) DBG | unable to find host DHCP lease matching {name: "ha-928358-m03", mac: "52:54:00:7e:d3:f9", ip: "192.168.39.44"} in network mk-ha-928358
	I1028 11:13:35.126256  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Getting to WaitForSSH function...
	I1028 11:13:35.126303  150723 main.go:141] libmachine: (ha-928358-m03) Reserved static IP address: 192.168.39.44
	I1028 11:13:35.126318  150723 main.go:141] libmachine: (ha-928358-m03) Waiting for SSH to be available...
	I1028 11:13:35.128851  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.129272  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.129315  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.129446  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using SSH client type: external
	I1028 11:13:35.129476  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa (-rw-------)
	I1028 11:13:35.129507  150723 main.go:141] libmachine: (ha-928358-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:13:35.129520  150723 main.go:141] libmachine: (ha-928358-m03) DBG | About to run SSH command:
	I1028 11:13:35.129564  150723 main.go:141] libmachine: (ha-928358-m03) DBG | exit 0
	I1028 11:13:35.253921  150723 main.go:141] libmachine: (ha-928358-m03) DBG | SSH cmd err, output: <nil>: 
	I1028 11:13:35.254211  150723 main.go:141] libmachine: (ha-928358-m03) KVM machine creation complete!
	I1028 11:13:35.254512  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:35.255052  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:35.255255  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:35.255399  150723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 11:13:35.255411  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetState
	I1028 11:13:35.256908  150723 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 11:13:35.256921  150723 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 11:13:35.256927  150723 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 11:13:35.256932  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.259735  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.260211  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.260237  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.260436  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.260625  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.260784  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.260899  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.261057  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.261307  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.261321  150723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 11:13:35.360859  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:13:35.360890  150723 main.go:141] libmachine: Detecting the provisioner...
	I1028 11:13:35.360902  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.364454  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.364848  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.364904  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.365213  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.365431  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.365607  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.365742  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.365932  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.366116  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.366130  150723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 11:13:35.470987  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 11:13:35.471094  150723 main.go:141] libmachine: found compatible host: buildroot
	I1028 11:13:35.471109  150723 main.go:141] libmachine: Provisioning with buildroot...
	I1028 11:13:35.471120  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.471399  150723 buildroot.go:166] provisioning hostname "ha-928358-m03"
	I1028 11:13:35.471424  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.471622  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.474085  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.474509  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.474542  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.474681  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.474871  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.475021  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.475156  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.475305  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.475494  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.475510  150723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358-m03 && echo "ha-928358-m03" | sudo tee /etc/hostname
	I1028 11:13:35.593400  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358-m03
	
	I1028 11:13:35.593429  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.596415  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.596740  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.596767  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.596962  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.597183  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.597361  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.597490  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.597704  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:35.597875  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:35.597892  150723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:13:35.715751  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:13:35.715791  150723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:13:35.715811  150723 buildroot.go:174] setting up certificates
	I1028 11:13:35.715821  150723 provision.go:84] configureAuth start
	I1028 11:13:35.715834  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetMachineName
	I1028 11:13:35.716106  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:35.718868  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.719187  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.719219  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.719354  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.721477  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.721760  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.721790  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.721917  150723 provision.go:143] copyHostCerts
	I1028 11:13:35.721979  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:13:35.722032  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:13:35.722044  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:13:35.722140  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:13:35.722245  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:13:35.722278  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:13:35.722289  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:13:35.722332  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:13:35.722402  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:13:35.722429  150723 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:13:35.722435  150723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:13:35.722459  150723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:13:35.722531  150723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358-m03 san=[127.0.0.1 192.168.39.44 ha-928358-m03 localhost minikube]
	I1028 11:13:35.825404  150723 provision.go:177] copyRemoteCerts
	I1028 11:13:35.825459  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:13:35.825483  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:35.828415  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.828773  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:35.828803  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:35.828972  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:35.829151  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:35.829337  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:35.829485  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:35.913472  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:13:35.913575  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:13:35.940828  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:13:35.940904  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 11:13:35.968009  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:13:35.968078  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 11:13:35.997592  150723 provision.go:87] duration metric: took 281.755193ms to configureAuth
	I1028 11:13:35.997618  150723 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:13:35.997801  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:35.997869  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.000450  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.000935  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.000970  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.001165  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.001385  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.001575  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.001734  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.001893  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:36.002062  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:36.002076  150723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:13:36.221329  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:13:36.221364  150723 main.go:141] libmachine: Checking connection to Docker...
	I1028 11:13:36.221433  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetURL
	I1028 11:13:36.222571  150723 main.go:141] libmachine: (ha-928358-m03) DBG | Using libvirt version 6000000
	I1028 11:13:36.224781  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.225156  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.225179  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.225329  150723 main.go:141] libmachine: Docker is up and running!
	I1028 11:13:36.225344  150723 main.go:141] libmachine: Reticulating splines...
	I1028 11:13:36.225353  150723 client.go:171] duration metric: took 23.895703285s to LocalClient.Create
	I1028 11:13:36.225379  150723 start.go:167] duration metric: took 23.895771231s to libmachine.API.Create "ha-928358"
	I1028 11:13:36.225390  150723 start.go:293] postStartSetup for "ha-928358-m03" (driver="kvm2")
	I1028 11:13:36.225399  150723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:13:36.225413  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.225669  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:13:36.225696  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.227681  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.227995  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.228023  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.228147  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.228314  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.228474  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.228601  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.313594  150723 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:13:36.318443  150723 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:13:36.318477  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:13:36.318544  150723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:13:36.318614  150723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:13:36.318624  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:13:36.318705  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:13:36.330227  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:13:36.357995  150723 start.go:296] duration metric: took 132.588764ms for postStartSetup
	I1028 11:13:36.358059  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetConfigRaw
	I1028 11:13:36.358728  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:36.361773  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.362238  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.362267  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.362589  150723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:13:36.362828  150723 start.go:128] duration metric: took 24.052057424s to createHost
	I1028 11:13:36.362855  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.365684  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.365985  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.366016  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.366211  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.366426  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.366575  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.366696  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.366842  150723 main.go:141] libmachine: Using SSH client type: native
	I1028 11:13:36.367055  150723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1028 11:13:36.367079  150723 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:13:36.470814  150723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114016.442636655
	
	I1028 11:13:36.470843  150723 fix.go:216] guest clock: 1730114016.442636655
	I1028 11:13:36.470853  150723 fix.go:229] Guest: 2024-10-28 11:13:36.442636655 +0000 UTC Remote: 2024-10-28 11:13:36.362843133 +0000 UTC m=+156.939582341 (delta=79.793522ms)
	I1028 11:13:36.470869  150723 fix.go:200] guest clock delta is within tolerance: 79.793522ms
	I1028 11:13:36.470874  150723 start.go:83] releasing machines lock for "ha-928358-m03", held for 24.160222671s
	I1028 11:13:36.470894  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.471174  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:36.473802  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.474314  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.474345  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.476703  150723 out.go:177] * Found network options:
	I1028 11:13:36.478253  150723 out.go:177]   - NO_PROXY=192.168.39.206,192.168.39.15
	W1028 11:13:36.479492  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:13:36.479516  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:13:36.479532  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480171  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480372  150723 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:13:36.480474  150723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:13:36.480516  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	W1028 11:13:36.480627  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	W1028 11:13:36.480648  150723 proxy.go:119] fail to check proxy env: Error ip not in block
	I1028 11:13:36.480710  150723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:13:36.480733  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:13:36.483390  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483597  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483802  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.483836  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.483976  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.484137  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.484152  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:36.484171  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:36.484240  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.484323  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:13:36.484392  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.484441  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:13:36.484542  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:13:36.484643  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:13:36.722609  150723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:13:36.728895  150723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:13:36.728959  150723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:13:36.746783  150723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:13:36.746814  150723 start.go:495] detecting cgroup driver to use...
	I1028 11:13:36.746889  150723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:13:36.764176  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:13:36.780539  150723 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:13:36.780611  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:13:36.795323  150723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:13:36.811733  150723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:13:36.943649  150723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:13:37.116480  150723 docker.go:233] disabling docker service ...
	I1028 11:13:37.116541  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:13:37.131848  150723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:13:37.146207  150723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:13:37.271760  150723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:13:37.397315  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:13:37.413150  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:13:37.433193  150723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:13:37.433274  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.448784  150723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:13:37.448861  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.461820  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.474878  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.487273  150723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:13:37.500384  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.513109  150723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.533296  150723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:13:37.546472  150723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:13:37.557495  150723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:13:37.557598  150723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:13:37.573136  150723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:13:37.584661  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:13:37.701023  150723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:13:37.798120  150723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:13:37.798207  150723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:13:37.803954  150723 start.go:563] Will wait 60s for crictl version
	I1028 11:13:37.804021  150723 ssh_runner.go:195] Run: which crictl
	I1028 11:13:37.808938  150723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:13:37.851814  150723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:13:37.851905  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:13:37.881347  150723 ssh_runner.go:195] Run: crio --version
	I1028 11:13:37.916129  150723 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:13:37.917503  150723 out.go:177]   - env NO_PROXY=192.168.39.206
	I1028 11:13:37.918841  150723 out.go:177]   - env NO_PROXY=192.168.39.206,192.168.39.15
	I1028 11:13:37.920060  150723 main.go:141] libmachine: (ha-928358-m03) Calling .GetIP
	I1028 11:13:37.923080  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:37.923530  150723 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:13:37.923560  150723 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:13:37.923801  150723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:13:37.928489  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:13:37.944276  150723 mustload.go:65] Loading cluster: ha-928358
	I1028 11:13:37.944540  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:13:37.944876  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:37.944917  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:37.960868  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I1028 11:13:37.961448  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:37.961978  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:37.962000  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:37.962320  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:37.962554  150723 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:13:37.964176  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:13:37.964500  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:37.964546  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:37.980099  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I1028 11:13:37.980536  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:37.980994  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:37.981027  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:37.981316  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:37.981476  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:13:37.981636  150723 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.44
	I1028 11:13:37.981649  150723 certs.go:194] generating shared ca certs ...
	I1028 11:13:37.981667  150723 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:37.981815  150723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:13:37.981867  150723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:13:37.981880  150723 certs.go:256] generating profile certs ...
	I1028 11:13:37.981981  150723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:13:37.982024  150723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408
	I1028 11:13:37.982045  150723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.44 192.168.39.254]
	I1028 11:13:38.031818  150723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 ...
	I1028 11:13:38.031849  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408: {Name:mk24630c498d89b32162095507c0812c854412bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:38.032046  150723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408 ...
	I1028 11:13:38.032062  150723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408: {Name:mk38f2fd390923bb1dfc386b88fc31f22cbd1405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:13:38.032164  150723 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.dd2c2408 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:13:38.032326  150723 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.dd2c2408 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:13:38.032501  150723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:13:38.032524  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:13:38.032548  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:13:38.032568  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:13:38.032585  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:13:38.032605  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:13:38.032622  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:13:38.032641  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:13:38.045605  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:13:38.045699  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:13:38.045758  150723 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:13:38.045774  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:13:38.045809  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:13:38.045836  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:13:38.045857  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:13:38.045912  150723 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:13:38.045950  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.045974  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.045992  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.046044  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:13:38.049011  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:38.049464  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:13:38.049485  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:38.049679  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:13:38.049889  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:13:38.050031  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:13:38.050163  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:13:38.129875  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1028 11:13:38.135272  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1028 11:13:38.146812  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1028 11:13:38.151195  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1028 11:13:38.162579  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1028 11:13:38.167018  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1028 11:13:38.178835  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1028 11:13:38.183162  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1028 11:13:38.195172  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1028 11:13:38.199929  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1028 11:13:38.212017  150723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1028 11:13:38.216559  150723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1028 11:13:38.228337  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:13:38.256831  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:13:38.282349  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:13:38.312381  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:13:38.340368  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1028 11:13:38.368852  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:13:38.396585  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:13:38.425195  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:13:38.453101  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:13:38.479115  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:13:38.505463  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:13:38.531445  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1028 11:13:38.550676  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1028 11:13:38.570134  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1028 11:13:38.588413  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1028 11:13:38.606756  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1028 11:13:38.626726  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1028 11:13:38.646275  150723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1028 11:13:38.665976  150723 ssh_runner.go:195] Run: openssl version
	I1028 11:13:38.672176  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:13:38.685017  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.690136  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.690209  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:13:38.697711  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:13:38.712239  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:13:38.725832  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.730869  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.730941  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:13:38.737271  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:13:38.751047  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:13:38.763980  150723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.769518  150723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.769615  150723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:13:38.776609  150723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:13:38.791196  150723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:13:38.796201  150723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 11:13:38.796261  150723 kubeadm.go:934] updating node {m03 192.168.39.44 8443 v1.31.2 crio true true} ...
	I1028 11:13:38.796362  150723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:13:38.796397  150723 kube-vip.go:115] generating kube-vip config ...
	I1028 11:13:38.796470  150723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:13:38.817160  150723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:13:38.817224  150723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
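	The generated manifest above runs kube-vip as a static pod with ARP-based leader election, so the control-plane VIP 192.168.39.254 is held by whichever control-plane node currently owns the plndr-cp-lock lease. A minimal sketch of checking where the VIP landed once the join finishes, assuming the ha-928358 profile and that minikube/kubectl are on PATH (illustrative only, not part of the captured run):

	    minikube ssh -p ha-928358 -n m03 -- ip addr show eth0            # the VIP appears only on the current kube-vip leader
	    kubectl --context ha-928358 -n kube-system get pods -o wide | grep kube-vip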
	I1028 11:13:38.817279  150723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:13:38.829712  150723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1028 11:13:38.829765  150723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1028 11:13:38.842596  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1028 11:13:38.842645  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:13:38.842602  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1028 11:13:38.842708  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1028 11:13:38.842755  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:13:38.842602  150723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1028 11:13:38.842821  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:13:38.842886  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1028 11:13:38.849835  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1028 11:13:38.849867  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1028 11:13:38.850062  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1028 11:13:38.850096  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1028 11:13:38.869860  150723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:13:38.870019  150723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1028 11:13:39.008547  150723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1028 11:13:39.008597  150723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1028 11:13:39.841044  150723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1028 11:13:39.851424  150723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1028 11:13:39.870537  150723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:13:39.890208  150723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:13:39.908650  150723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:13:39.913130  150723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:13:39.926430  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:13:40.057322  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:13:40.076284  150723 host.go:66] Checking if "ha-928358" exists ...
	I1028 11:13:40.076669  150723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:13:40.076716  150723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:13:40.094065  150723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I1028 11:13:40.094505  150723 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:13:40.095080  150723 main.go:141] libmachine: Using API Version  1
	I1028 11:13:40.095109  150723 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:13:40.095526  150723 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:13:40.095722  150723 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:13:40.095896  150723 start.go:317] joinCluster: &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:13:40.096063  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1028 11:13:40.096090  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:13:40.099282  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:40.099834  150723 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:13:40.099865  150723 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:13:40.100013  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:13:40.100216  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:13:40.100410  150723 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:13:40.100563  150723 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:13:40.273359  150723 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:13:40.273397  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a413hq.qk9z79cdsin0pfn9 --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443"
	I1028 11:14:04.540358  150723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a413hq.qk9z79cdsin0pfn9 --discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-928358-m03 --control-plane --apiserver-advertise-address=192.168.39.44 --apiserver-bind-port=8443": (24.266932187s)
	I1028 11:14:04.540403  150723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1028 11:14:05.110298  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-928358-m03 minikube.k8s.io/updated_at=2024_10_28T11_14_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=ha-928358 minikube.k8s.io/primary=false
	I1028 11:14:05.258236  150723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-928358-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1028 11:14:05.400029  150723 start.go:319] duration metric: took 25.304126551s to joinCluster
	I1028 11:14:05.400118  150723 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:14:05.400571  150723 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:14:05.401586  150723 out.go:177] * Verifying Kubernetes components...
	I1028 11:14:05.403593  150723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:14:05.647217  150723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:14:05.664862  150723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:14:05.665098  150723 kapi.go:59] client config for ha-928358: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1028 11:14:05.665166  150723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.206:8443
	I1028 11:14:05.665399  150723 node_ready.go:35] waiting up to 6m0s for node "ha-928358-m03" to be "Ready" ...
	I1028 11:14:05.665469  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:05.665476  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:05.665484  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:05.665490  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:05.669744  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:06.165968  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:06.165997  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:06.166009  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:06.166016  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:06.170123  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:06.666317  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:06.666416  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:06.666445  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:06.666462  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:06.670843  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:07.165728  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:07.165755  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:07.165768  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:07.165776  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:07.169304  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:07.666123  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:07.666154  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:07.666165  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:07.666171  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:07.669713  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:07.670892  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:08.166009  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:08.166031  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:08.166039  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:08.166043  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:08.169692  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:08.666389  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:08.666423  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:08.666436  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:08.666446  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:08.671535  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:14:09.166494  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:09.166518  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:09.166530  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:09.166537  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:09.170858  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:09.665722  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:09.665745  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:09.665753  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:09.665762  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:09.670170  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:09.671084  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:10.165695  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:10.165724  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:10.165735  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:10.165742  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:10.173147  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:10.666401  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:10.666429  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:10.666440  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:10.666443  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:10.671830  150723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1028 11:14:11.165701  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:11.165722  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:11.165731  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:11.165737  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:11.228148  150723 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I1028 11:14:11.666333  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:11.666388  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:11.666401  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:11.666408  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:11.670186  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:11.671264  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:12.165684  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:12.165709  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:12.165715  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:12.165719  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:12.170052  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:12.666466  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:12.666494  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:12.666504  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:12.666509  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:12.670352  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:13.166382  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:13.166410  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:13.166421  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:13.166427  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:13.171235  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:13.666623  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:13.666647  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:13.666656  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:13.666661  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:13.670621  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:14.165740  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:14.165767  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:14.165776  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:14.165783  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:14.169178  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:14.170214  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:14.666184  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:14.666206  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:14.666215  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:14.666219  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:14.670466  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:15.166232  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:15.166261  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:15.166272  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:15.166276  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:15.173444  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:15.666306  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:15.666335  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:15.666344  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:15.666348  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:15.670385  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:16.166429  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:16.166461  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:16.166474  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:16.166481  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:16.170181  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:16.170699  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:16.665698  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:16.665723  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:16.665730  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:16.665734  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:16.669776  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:17.165640  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:17.165664  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:17.165672  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:17.165676  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:17.169368  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:17.666177  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:17.666202  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:17.666210  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:17.666214  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:17.670134  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.165917  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:18.165940  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:18.165948  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:18.165952  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:18.169496  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.665925  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:18.665949  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:18.665971  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:18.665976  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:18.669433  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:18.670970  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:19.165694  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:19.165718  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:19.165728  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:19.165732  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:19.170437  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:19.666095  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:19.666123  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:19.666134  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:19.666141  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:19.668970  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:20.166291  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:20.166314  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:20.166322  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:20.166326  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:20.170016  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:20.665789  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:20.665815  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:20.665822  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:20.665827  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:20.669287  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:21.165826  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:21.165853  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:21.165862  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:21.165868  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:21.169651  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:21.170332  150723 node_ready.go:53] node "ha-928358-m03" has status "Ready":"False"
	I1028 11:14:21.665771  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:21.665804  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:21.665816  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:21.665822  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:21.669841  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:22.166380  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:22.166406  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:22.166414  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:22.166420  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:22.169816  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:22.666341  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:22.666364  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:22.666372  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:22.666377  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:22.670923  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:23.165737  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:23.165762  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.165771  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.165776  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.169299  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.665765  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:23.665789  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.665797  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.665801  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.669697  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.670619  150723 node_ready.go:49] node "ha-928358-m03" has status "Ready":"True"
	I1028 11:14:23.670643  150723 node_ready.go:38] duration metric: took 18.005227415s for node "ha-928358-m03" to be "Ready" ...
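	The repeated GETs of /api/v1/nodes/ha-928358-m03 above are minikube polling the node's Ready condition roughly every 500ms until it reports True. An equivalent manual wait, sketched here for illustration and assuming the kubeconfig context is named ha-928358 (the test itself does not run this):

	    kubectl --context ha-928358 wait --for=condition=Ready node/ha-928358-m03 --timeout=6m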
	I1028 11:14:23.670662  150723 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:14:23.670813  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:23.670845  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.670858  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.670875  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.677257  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:23.683895  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.683990  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-gnm9r
	I1028 11:14:23.683999  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.684007  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.684011  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.688327  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:23.688931  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.688948  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.688956  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.688960  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.691787  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.692523  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.692543  150723 pod_ready.go:82] duration metric: took 8.61912ms for pod "coredns-7c65d6cfc9-gnm9r" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.692554  150723 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.692624  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xxxgw
	I1028 11:14:23.692632  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.692639  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.692645  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.695738  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:23.696515  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.696533  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.696542  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.696548  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.699472  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.700068  150723 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.700097  150723 pod_ready.go:82] duration metric: took 7.535535ms for pod "coredns-7c65d6cfc9-xxxgw" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.700107  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.700162  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358
	I1028 11:14:23.700171  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.700178  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.700184  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.702917  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.703534  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:23.703550  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.703559  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.703566  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.706103  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.706650  150723 pod_ready.go:93] pod "etcd-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.706674  150723 pod_ready.go:82] duration metric: took 6.560031ms for pod "etcd-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.706686  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.706758  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m02
	I1028 11:14:23.706768  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.706778  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.706785  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.709373  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.710451  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:23.710472  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.710484  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.710490  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.713376  150723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1028 11:14:23.713980  150723 pod_ready.go:93] pod "etcd-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:23.714010  150723 pod_ready.go:82] duration metric: took 7.313443ms for pod "etcd-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.714024  150723 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:23.866359  150723 request.go:632] Waited for 152.224049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m03
	I1028 11:14:23.866476  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-928358-m03
	I1028 11:14:23.866492  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:23.866504  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:23.866516  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:23.871166  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.066273  150723 request.go:632] Waited for 194.358951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:24.066350  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:24.066361  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.066372  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.066378  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.070313  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.071003  150723 pod_ready.go:93] pod "etcd-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.071021  150723 pod_ready.go:82] duration metric: took 356.990267ms for pod "etcd-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.071039  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.266224  150723 request.go:632] Waited for 195.110039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:14:24.266285  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358
	I1028 11:14:24.266290  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.266298  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.266303  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.271102  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.466777  150723 request.go:632] Waited for 195.051662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:24.466835  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:24.466840  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.466848  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.466857  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.471602  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:24.472438  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.472458  150723 pod_ready.go:82] duration metric: took 401.411661ms for pod "kube-apiserver-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.472468  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.666245  150723 request.go:632] Waited for 193.688569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:14:24.666314  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m02
	I1028 11:14:24.666321  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.666332  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.666337  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.670192  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.866165  150723 request.go:632] Waited for 195.218003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:24.866225  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:24.866230  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:24.866237  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:24.866242  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:24.869696  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:24.870520  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:24.870539  150723 pod_ready.go:82] duration metric: took 398.065091ms for pod "kube-apiserver-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:24.870549  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.066723  150723 request.go:632] Waited for 196.090526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m03
	I1028 11:14:25.066790  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-928358-m03
	I1028 11:14:25.066796  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.066812  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.066818  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.070840  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:25.266492  150723 request.go:632] Waited for 194.408437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:25.266550  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:25.266555  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.266563  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.266567  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.270440  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:25.271647  150723 pod_ready.go:93] pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:25.271668  150723 pod_ready.go:82] duration metric: took 401.112731ms for pod "kube-apiserver-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.271677  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.466686  150723 request.go:632] Waited for 194.942796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:14:25.466776  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358
	I1028 11:14:25.466782  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.466791  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.466799  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.478807  150723 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1028 11:14:25.666227  150723 request.go:632] Waited for 186.359371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:25.666322  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:25.666335  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.666346  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.666355  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.669950  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:25.670691  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:25.670710  150723 pod_ready.go:82] duration metric: took 399.026254ms for pod "kube-controller-manager-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.670723  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:25.866724  150723 request.go:632] Waited for 195.936368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:14:25.866801  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m02
	I1028 11:14:25.866807  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:25.866814  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:25.866819  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:25.870640  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.065827  150723 request.go:632] Waited for 194.310294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:26.065907  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:26.065912  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.065920  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.065925  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.069699  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.070459  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.070478  150723 pod_ready.go:82] duration metric: took 399.749253ms for pod "kube-controller-manager-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.070489  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.266701  150723 request.go:632] Waited for 196.138179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m03
	I1028 11:14:26.266792  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-928358-m03
	I1028 11:14:26.266809  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.266825  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.266832  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.270679  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.466081  150723 request.go:632] Waited for 194.361983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:26.466174  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:26.466182  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.466194  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.466206  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.470252  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:26.470784  150723 pod_ready.go:93] pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.470804  150723 pod_ready.go:82] duration metric: took 400.309396ms for pod "kube-controller-manager-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.470815  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.665844  150723 request.go:632] Waited for 194.95975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:14:26.665902  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8fxdn
	I1028 11:14:26.665925  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.665956  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.665963  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.669385  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.866618  150723 request.go:632] Waited for 196.393847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:26.866674  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:26.866679  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:26.866687  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:26.866690  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:26.870012  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:26.870701  150723 pod_ready.go:93] pod "kube-proxy-8fxdn" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:26.870720  150723 pod_ready.go:82] duration metric: took 399.898606ms for pod "kube-proxy-8fxdn" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:26.870734  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.065775  150723 request.go:632] Waited for 194.965869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:14:27.065845  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cfhp5
	I1028 11:14:27.065850  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.065858  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.065865  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.069945  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:27.266078  150723 request.go:632] Waited for 195.378208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:27.266154  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:27.266159  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.266167  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.266174  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.269961  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:27.270605  150723 pod_ready.go:93] pod "kube-proxy-cfhp5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:27.270625  150723 pod_ready.go:82] duration metric: took 399.882701ms for pod "kube-proxy-cfhp5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.270640  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-np8x5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.466435  150723 request.go:632] Waited for 195.719587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-np8x5
	I1028 11:14:27.466503  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-np8x5
	I1028 11:14:27.466511  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.466550  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.466562  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.473780  150723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1028 11:14:27.666214  150723 request.go:632] Waited for 191.347069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:27.666284  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:27.666291  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.666298  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.666302  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.670820  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:27.671554  150723 pod_ready.go:93] pod "kube-proxy-np8x5" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:27.671578  150723 pod_ready.go:82] duration metric: took 400.929643ms for pod "kube-proxy-np8x5" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.671589  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:27.866741  150723 request.go:632] Waited for 195.08002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:14:27.866814  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358
	I1028 11:14:27.866821  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:27.866832  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:27.866843  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:27.870682  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.066337  150723 request.go:632] Waited for 194.812157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:28.066403  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358
	I1028 11:14:28.066408  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.066416  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.066420  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.069743  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.070462  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.070483  150723 pod_ready.go:82] duration metric: took 398.887712ms for pod "kube-scheduler-ha-928358" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.070497  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.265961  150723 request.go:632] Waited for 195.392733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:14:28.266039  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m02
	I1028 11:14:28.266047  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.266057  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.266088  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.269740  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.465851  150723 request.go:632] Waited for 195.318291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:28.465931  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m02
	I1028 11:14:28.465937  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.465949  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.465957  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.470812  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:28.471696  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.471720  150723 pod_ready.go:82] duration metric: took 401.210524ms for pod "kube-scheduler-ha-928358-m02" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.471733  150723 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.665763  150723 request.go:632] Waited for 193.940561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m03
	I1028 11:14:28.665854  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-928358-m03
	I1028 11:14:28.665869  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.665877  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.665883  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.669746  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.866768  150723 request.go:632] Waited for 196.382736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:28.866827  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/ha-928358-m03
	I1028 11:14:28.866832  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.866840  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.866844  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.870665  150723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1028 11:14:28.871107  150723 pod_ready.go:93] pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace has status "Ready":"True"
	I1028 11:14:28.871125  150723 pod_ready.go:82] duration metric: took 399.382061ms for pod "kube-scheduler-ha-928358-m03" in "kube-system" namespace to be "Ready" ...
	I1028 11:14:28.871136  150723 pod_ready.go:39] duration metric: took 5.200463354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:14:28.871154  150723 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:14:28.871205  150723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:14:28.894991  150723 api_server.go:72] duration metric: took 23.494825881s to wait for apiserver process to appear ...
	I1028 11:14:28.895029  150723 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:14:28.895053  150723 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1028 11:14:28.901769  150723 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1028 11:14:28.901850  150723 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1028 11:14:28.901857  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:28.901868  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:28.901879  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:28.903049  150723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1028 11:14:28.903133  150723 api_server.go:141] control plane version: v1.31.2
	I1028 11:14:28.903153  150723 api_server.go:131] duration metric: took 8.11544ms to wait for apiserver health ...
	I1028 11:14:28.903164  150723 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:14:29.066557  150723 request.go:632] Waited for 163.310035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.066623  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.066628  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.066650  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.066657  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.073405  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:29.079996  150723 system_pods.go:59] 24 kube-system pods found
	I1028 11:14:29.080029  150723 system_pods.go:61] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:14:29.080039  150723 system_pods.go:61] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:14:29.080043  150723 system_pods.go:61] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:14:29.080047  150723 system_pods.go:61] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:14:29.080050  150723 system_pods.go:61] "etcd-ha-928358-m03" [56e4453a-65fd-4b3f-9556-e5cec7aa0400] Running
	I1028 11:14:29.080053  150723 system_pods.go:61] "kindnet-9k2mz" [946ea25c-8bc6-46d5-9804-7d8f75ba2ad4] Running
	I1028 11:14:29.080056  150723 system_pods.go:61] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:14:29.080062  150723 system_pods.go:61] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:14:29.080065  150723 system_pods.go:61] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:14:29.080068  150723 system_pods.go:61] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:14:29.080071  150723 system_pods.go:61] "kube-apiserver-ha-928358-m03" [b5e63feb-e15c-42f4-8e49-9775a7602add] Running
	I1028 11:14:29.080075  150723 system_pods.go:61] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:14:29.080079  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:14:29.080085  150723 system_pods.go:61] "kube-controller-manager-ha-928358-m03" [ad543df1-fd1e-4fbe-b70b-06af7d39f971] Running
	I1028 11:14:29.080089  150723 system_pods.go:61] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:14:29.080094  150723 system_pods.go:61] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:14:29.080099  150723 system_pods.go:61] "kube-proxy-np8x5" [c8dd1d78-2375-49d4-b476-ec52dd65830b] Running
	I1028 11:14:29.080103  150723 system_pods.go:61] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:14:29.080109  150723 system_pods.go:61] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:14:29.080117  150723 system_pods.go:61] "kube-scheduler-ha-928358-m03" [b9809d8d-8a45-4363-9b03-55995deb6b62] Running
	I1028 11:14:29.080124  150723 system_pods.go:61] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:14:29.080135  150723 system_pods.go:61] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:14:29.080139  150723 system_pods.go:61] "kube-vip-ha-928358-m03" [894e8b21-2ffc-4ad5-89b1-80c915aecfb9] Running
	I1028 11:14:29.080142  150723 system_pods.go:61] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:14:29.080148  150723 system_pods.go:74] duration metric: took 176.977613ms to wait for pod list to return data ...
	I1028 11:14:29.080159  150723 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:14:29.266599  150723 request.go:632] Waited for 186.363794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:14:29.266653  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1028 11:14:29.266658  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.266665  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.266669  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.271060  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:29.271213  150723 default_sa.go:45] found service account: "default"
	I1028 11:14:29.271235  150723 default_sa.go:55] duration metric: took 191.069027ms for default service account to be created ...
	I1028 11:14:29.271247  150723 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:14:29.466315  150723 request.go:632] Waited for 194.981882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.466408  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1028 11:14:29.466421  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.466436  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.466448  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.472918  150723 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1028 11:14:29.481266  150723 system_pods.go:86] 24 kube-system pods found
	I1028 11:14:29.481302  150723 system_pods.go:89] "coredns-7c65d6cfc9-gnm9r" [a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01] Running
	I1028 11:14:29.481308  150723 system_pods.go:89] "coredns-7c65d6cfc9-xxxgw" [6a07f06b-45fb-48df-a2a2-11a778f673f9] Running
	I1028 11:14:29.481312  150723 system_pods.go:89] "etcd-ha-928358" [c681dcb6-bc71-46d6-aa5a-899417abb849] Running
	I1028 11:14:29.481316  150723 system_pods.go:89] "etcd-ha-928358-m02" [af3a82df-4742-489f-a117-9b9d2b7a048d] Running
	I1028 11:14:29.481320  150723 system_pods.go:89] "etcd-ha-928358-m03" [56e4453a-65fd-4b3f-9556-e5cec7aa0400] Running
	I1028 11:14:29.481324  150723 system_pods.go:89] "kindnet-9k2mz" [946ea25c-8bc6-46d5-9804-7d8f75ba2ad4] Running
	I1028 11:14:29.481327  150723 system_pods.go:89] "kindnet-j4vj5" [ac0a5c2f-3377-4bd0-9f94-1a7537c53166] Running
	I1028 11:14:29.481330  150723 system_pods.go:89] "kindnet-pq9gp" [2ea8de0e-a664-4adb-aec2-6f98508540c6] Running
	I1028 11:14:29.481333  150723 system_pods.go:89] "kube-apiserver-ha-928358" [a788332b-962c-46b4-82b4-a02964f5d5dc] Running
	I1028 11:14:29.481336  150723 system_pods.go:89] "kube-apiserver-ha-928358-m02" [c92501e8-c1d5-477e-a6b6-b60620faa00b] Running
	I1028 11:14:29.481339  150723 system_pods.go:89] "kube-apiserver-ha-928358-m03" [b5e63feb-e15c-42f4-8e49-9775a7602add] Running
	I1028 11:14:29.481343  150723 system_pods.go:89] "kube-controller-manager-ha-928358" [78c19f2e-d681-4e3c-9bfd-727274401f78] Running
	I1028 11:14:29.481346  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m02" [cd1072fd-cd9a-4a94-9781-02d871c3600c] Running
	I1028 11:14:29.481350  150723 system_pods.go:89] "kube-controller-manager-ha-928358-m03" [ad543df1-fd1e-4fbe-b70b-06af7d39f971] Running
	I1028 11:14:29.481354  150723 system_pods.go:89] "kube-proxy-8fxdn" [7b2e1e84-6129-4868-b46b-525da3cdf687] Running
	I1028 11:14:29.481359  150723 system_pods.go:89] "kube-proxy-cfhp5" [475c34b6-f766-4f80-80ed-035648e85112] Running
	I1028 11:14:29.481362  150723 system_pods.go:89] "kube-proxy-np8x5" [c8dd1d78-2375-49d4-b476-ec52dd65830b] Running
	I1028 11:14:29.481364  150723 system_pods.go:89] "kube-scheduler-ha-928358" [d3a05763-fa71-4bdd-9c05-de0bc7249a3c] Running
	I1028 11:14:29.481368  150723 system_pods.go:89] "kube-scheduler-ha-928358-m02" [2d391e9c-b6c4-4f4d-823b-f9413b79db5c] Running
	I1028 11:14:29.481372  150723 system_pods.go:89] "kube-scheduler-ha-928358-m03" [b9809d8d-8a45-4363-9b03-55995deb6b62] Running
	I1028 11:14:29.481378  150723 system_pods.go:89] "kube-vip-ha-928358" [3441ce6b-3a50-44ba-b0c7-6f7c869cf62c] Running
	I1028 11:14:29.481382  150723 system_pods.go:89] "kube-vip-ha-928358-m02" [82ba5bf8-b053-47f5-bb71-e7cbc2e17cce] Running
	I1028 11:14:29.481388  150723 system_pods.go:89] "kube-vip-ha-928358-m03" [894e8b21-2ffc-4ad5-89b1-80c915aecfb9] Running
	I1028 11:14:29.481392  150723 system_pods.go:89] "storage-provisioner" [84b302cf-9f88-4a96-aa61-c2ca6512e060] Running
	I1028 11:14:29.481402  150723 system_pods.go:126] duration metric: took 210.146699ms to wait for k8s-apps to be running ...
	I1028 11:14:29.481415  150723 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:14:29.481478  150723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:14:29.499294  150723 system_svc.go:56] duration metric: took 17.867458ms WaitForService to wait for kubelet
	I1028 11:14:29.499345  150723 kubeadm.go:582] duration metric: took 24.099188581s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:14:29.499369  150723 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:14:29.666183  150723 request.go:632] Waited for 166.698659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1028 11:14:29.666244  150723 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1028 11:14:29.666250  150723 round_trippers.go:469] Request Headers:
	I1028 11:14:29.666258  150723 round_trippers.go:473]     Accept: application/json, */*
	I1028 11:14:29.666262  150723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1028 11:14:29.670701  150723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1028 11:14:29.671840  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671859  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671869  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671873  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671877  150723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:14:29.671880  150723 node_conditions.go:123] node cpu capacity is 2
	I1028 11:14:29.671883  150723 node_conditions.go:105] duration metric: took 172.509467ms to run NodePressure ...
	I1028 11:14:29.671895  150723 start.go:241] waiting for startup goroutines ...
	I1028 11:14:29.671914  150723 start.go:255] writing updated cluster config ...
	I1028 11:14:29.672186  150723 ssh_runner.go:195] Run: rm -f paused
	I1028 11:14:29.727881  150723 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 11:14:29.729936  150723 out.go:177] * Done! kubectl is now configured to use "ha-928358" cluster and "default" namespace by default
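	For reference, the readiness and health checks logged above can be reproduced by hand as a rough manual equivalent; this is only a sketch, assuming the ha-928358 profile is still running, that its context was written to kubeconfig as shown in the "Done!" line, and that the apiserver endpoint 192.168.39.206:8443 from the log is reachable from the host. Output will differ on another cluster.
	
	    # apiserver process check, same pgrep the log shows at api_server.go:52
	    minikube -p ha-928358 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	    # apiserver healthz probe; the log (api_server.go:279) got HTTP 200 with body "ok"
	    curl -k https://192.168.39.206:8443/healthz
	    # kubelet service check, analogous to the systemctl call at system_svc.go:44
	    minikube -p ha-928358 ssh "sudo systemctl is-active kubelet"
	    # list the kube-system pods that the log reports as Running (24 found)
	    kubectl --context ha-928358 get pods -n kube-system
	
	These commands only mirror what the test already verified automatically; they are not part of the test harness itself.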
	
	
	==> CRI-O <==
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.574933090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114311574910584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4887f0d9-2b9d-4eb8-9113-baf298620a86 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.575696653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=839cb379-4a28-4b6c-9999-165918535912 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.575771923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=839cb379-4a28-4b6c-9999-165918535912 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.576110686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=839cb379-4a28-4b6c-9999-165918535912 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.625963695Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c2d398f-cd59-42c8-8aaa-5b225ca16562 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.626150637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c2d398f-cd59-42c8-8aaa-5b225ca16562 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.628431571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c849d9ca-2969-4749-b017-8915ba8c690f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.629438664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114311629399979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c849d9ca-2969-4749-b017-8915ba8c690f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.633199023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c848d7cf-d3b3-400b-aed8-cf39b4c3d01a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.633427453Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c848d7cf-d3b3-400b-aed8-cf39b4c3d01a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.634175915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c848d7cf-d3b3-400b-aed8-cf39b4c3d01a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.679057748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45c07f3f-4d9d-4b54-bec5-0dc540f2d007 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.679155428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45c07f3f-4d9d-4b54-bec5-0dc540f2d007 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.681598116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e27e925-2674-420d-8162-a46bab1895c1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.682296157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114311682259759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e27e925-2674-420d-8162-a46bab1895c1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.682877305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bcb8354-3e2b-4109-9ca7-b1721d028e4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.682932218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bcb8354-3e2b-4109-9ca7-b1721d028e4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.683270998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bcb8354-3e2b-4109-9ca7-b1721d028e4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.722148201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fe3e4c6-3814-4a2c-a0d6-a386f8eae4f2 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.722219709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fe3e4c6-3814-4a2c-a0d6-a386f8eae4f2 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.723566382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2622f858-5485-467c-b64f-f2667ca16177 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.723974473Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114311723953427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2622f858-5485-467c-b64f-f2667ca16177 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.724777683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=330d6919-6ad3-4147-b317-54cfa28997db name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.724833055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=330d6919-6ad3-4147-b317-54cfa28997db name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:18:31 ha-928358 crio[664]: time="2024-10-28 11:18:31.725399032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:678eb45e28d220a42bcac7415e4ce88f279076917f7b00e8a2b2f5c0cf677326,PodSandboxId:6fcf4a6026d957f8e79ffb85c32669ca9a365d9fc8214b6870229799cc4597fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1730114074570850330,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-dnw8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c810197-a557-46ef-b357-7e291a4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134,PodSandboxId:554c79cdc22b78e5a6470ffe34ccc5341e3ec6d3ffa7565e4c1beb707ef72e2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923426706643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gnm9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c7c6f0-1cdd-49c0-b778-5b0ec0ac1e01,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962,PodSandboxId:b55f959c9e26e991249ae94c2e429ff30afd02c35968b1f5197daf1f675d1e58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730113923397679448,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxxgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6a07f06b-45fb-48df-a2a2-11a778f673f9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101876df5ba495ae26f6192a67cf58daa6b5c8bd168e8cd3628351b01c68e901,PodSandboxId:cc9b8c60752926d41d28a4dcdfd1e19ef0de8ebc75555078fd1e376da4565e85,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1730113923367404788,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84b302cf-9f88-4a96-aa61-c2ca6512e060,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a,PodSandboxId:af0a9858b9f50c8e572c045e8c3126c8f9283cd7d8585ccdb4e89bb943d42fb5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CONTAINER_RUNNING,CreatedAt:17301139
10769943117,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pq9gp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea8de0e-a664-4adb-aec2-6f98508540c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7,PodSandboxId:f07333184a00738bbc04d8dc353d893f533ce87973bf71385f6653f9284a2e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730113910534253818,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fxdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b2e1e84-6129-4868-b46b-525da3cdf687,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653,PodSandboxId:aef8ad820f7339156e3cf807c79fad8a3b024b9e79758024addbf1b76722e750,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:016cfb97b0edc5f5faf96fd4671670e2ee0c2263308214ac118721a81729592d,State:CONTAINER_RUNNING,CreatedAt:1730113902093737215,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f0454183202822eaaf9dce289e7ab0,},Annotations:map[string]string{io.kubernetes.container.hash: f5549532,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854,PodSandboxId:841e8a03bb9b36d587566c629f28b86c338761184f55756297ccbf2c4bd32471,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730113899215696227,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c6aafad1b68cb8667c9a27dc935b2f4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52,PodSandboxId:1975c249cdfeea490e8715feb43cef751e0e345b84fbcabb3e80886b24a36158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730113899175911994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ad239d10939bdcd9fa6b3f4d3a18685,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef,PodSandboxId:2efa4330e0881e7fbc78ae172c7a9787884c589309ac84dc1ba6af39d4932b8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730113899117563670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-928358,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3ddb9faad874d83f5a9c68c563fb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583,PodSandboxId:041b17e002580a4be2c849c0bf7c03947a7914737165687390c2f455aea5b083,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730113899086865409,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-928358,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d5e9725d6fffac64bd660c7f6042f6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=330d6919-6ad3-4147-b317-54cfa28997db name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	678eb45e28d22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   6fcf4a6026d95       busybox-7dff88458-dnw8z
	267b822906895       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   554c79cdc22b7       coredns-7c65d6cfc9-gnm9r
	0ec81022134ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b55f959c9e26e       coredns-7c65d6cfc9-xxxgw
	101876df5ba49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   cc9b8c6075292       storage-provisioner
	93fda9ea564e1       3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52                                      6 minutes ago       Running             kindnet-cni               0                   af0a9858b9f50       kindnet-pq9gp
	6af78d85866c9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   f07333184a007       kube-proxy-8fxdn
	b4500f47684e6       ghcr.io/kube-vip/kube-vip@sha256:3742a655001d24c4ec1a4da019fdbacb1699cd42d40e34753848fb6b0c8b5215     6 minutes ago       Running             kube-vip                  0                   aef8ad820f733       kube-vip-ha-928358
	a75ab3d16aba2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   841e8a03bb9b3       etcd-ha-928358
	f8221151573cf       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   1975c249cdfee       kube-apiserver-ha-928358
	e735b7e201a7d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   2efa4330e0881       kube-controller-manager-ha-928358
	1be8f3556358e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   041b17e002580       kube-scheduler-ha-928358
	
	
	==> coredns [0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962] <==
	[INFO] 10.244.2.2:54221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001644473s
	[INFO] 10.244.2.2:58493 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00055293s
	[INFO] 10.244.1.2:59466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000373197s
	[INFO] 10.244.1.2:59196 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002135371s
	[INFO] 10.244.0.4:48789 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140504s
	[INFO] 10.244.0.4:43613 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168237s
	[INFO] 10.244.0.4:38143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.016935286s
	[INFO] 10.244.0.4:39110 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000177298s
	[INFO] 10.244.2.2:46780 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169863s
	[INFO] 10.244.2.2:56782 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002009621s
	[INFO] 10.244.2.2:39525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138628s
	[INFO] 10.244.2.2:53832 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000216458s
	[INFO] 10.244.1.2:39727 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000226061s
	[INFO] 10.244.1.2:60944 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001495416s
	[INFO] 10.244.1.2:36506 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119701s
	[INFO] 10.244.1.2:59657 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001674s
	[INFO] 10.244.0.4:50368 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178977s
	[INFO] 10.244.0.4:47562 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089999s
	[INFO] 10.244.1.2:44983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013645s
	[INFO] 10.244.1.2:33581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000164661s
	[INFO] 10.244.1.2:39245 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099456s
	[INFO] 10.244.0.4:48286 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018935s
	[INFO] 10.244.0.4:33651 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000163132s
	[INFO] 10.244.2.2:57361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144876s
	[INFO] 10.244.2.2:38124 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021886s
	
	
	==> coredns [267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134] <==
	[INFO] 10.244.0.4:46197 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168175s
	[INFO] 10.244.0.4:43404 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138086s
	[INFO] 10.244.2.2:42078 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211245s
	[INFO] 10.244.2.2:43818 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001478975s
	[INFO] 10.244.2.2:36869 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148567s
	[INFO] 10.244.2.2:38696 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110904s
	[INFO] 10.244.1.2:53013 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000625096s
	[INFO] 10.244.1.2:57247 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002184098s
	[INFO] 10.244.1.2:60298 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097712s
	[INFO] 10.244.1.2:42104 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099517s
	[INFO] 10.244.0.4:43344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166235s
	[INFO] 10.244.0.4:39756 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110369s
	[INFO] 10.244.2.2:51568 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132969s
	[INFO] 10.244.2.2:39038 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106245s
	[INFO] 10.244.2.2:36223 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090887s
	[INFO] 10.244.2.2:53817 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077711s
	[INFO] 10.244.1.2:45611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112879s
	[INFO] 10.244.0.4:48292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126001s
	[INFO] 10.244.0.4:49134 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000314244s
	[INFO] 10.244.2.2:38137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166744s
	[INFO] 10.244.2.2:49391 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000218881s
	[INFO] 10.244.1.2:58619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152475s
	[INFO] 10.244.1.2:59879 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000283359s
	[INFO] 10.244.1.2:33696 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103786s
	[INFO] 10.244.1.2:41150 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120227s
	
	
	==> describe nodes <==
	Name:               ha-928358
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_11_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:11:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:11:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:12:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    ha-928358
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3063a9eb16b941929fe95ea9deb85942
	  System UUID:                3063a9eb-16b9-4192-9fe9-5ea9deb85942
	  Boot ID:                    4750ce27-a752-459c-82e1-f46d3ba9e4fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dnw8z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-7c65d6cfc9-gnm9r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m42s
	  kube-system                 coredns-7c65d6cfc9-xxxgw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m42s
	  kube-system                 etcd-ha-928358                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m47s
	  kube-system                 kindnet-pq9gp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-apiserver-ha-928358             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-controller-manager-ha-928358    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-proxy-8fxdn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-scheduler-ha-928358             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-vip-ha-928358                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m40s  kube-proxy       
	  Normal  Starting                 6m47s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m47s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m47s  kubelet          Node ha-928358 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m47s  kubelet          Node ha-928358 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m47s  kubelet          Node ha-928358 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m43s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	  Normal  NodeReady                6m30s  kubelet          Node ha-928358 status is now: NodeReady
	  Normal  RegisteredNode           5m37s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	  Normal  RegisteredNode           4m22s  node-controller  Node ha-928358 event: Registered Node ha-928358 in Controller
	
	
	Name:               ha-928358-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_12_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:12:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:15:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 28 Oct 2024 11:14:49 +0000   Mon, 28 Oct 2024 11:16:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-928358-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb0972414207466c8358559557f25b09
	  System UUID:                fb097241-4207-466c-8358-559557f25b09
	  Boot ID:                    69b9f603-4134-42b4-a3f9-eeae845c3c91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tx5tk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-ha-928358-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m44s
	  kube-system                 kindnet-j4vj5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m46s
	  kube-system                 kube-apiserver-ha-928358-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-controller-manager-ha-928358-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-proxy-cfhp5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-scheduler-ha-928358-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-vip-ha-928358-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m46s (x8 over 5m46s)  kubelet          Node ha-928358-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m46s (x8 over 5m46s)  kubelet          Node ha-928358-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s (x7 over 5m46s)  kubelet          Node ha-928358-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m43s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-928358-m02 event: Registered Node ha-928358-m02 in Controller
	  Normal  NodeNotReady             118s                   node-controller  Node ha-928358-m02 status is now: NodeNotReady
	
	
	Name:               ha-928358-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_14_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:14:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:15:02 +0000   Mon, 28 Oct 2024 11:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    ha-928358-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebf69c3934784b66bc2bf05f458d71ba
	  System UUID:                ebf69c39-3478-4b66-bc2b-f05f458d71ba
	  Boot ID:                    2e5043ad-620d-4233-b866-677c45434de6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h8ctp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-ha-928358-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m30s
	  kube-system                 kindnet-9k2mz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m32s
	  kube-system                 kube-apiserver-ha-928358-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-ha-928358-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-np8x5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-ha-928358-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-vip-ha-928358-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m32s (x8 over 4m32s)  kubelet          Node ha-928358-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s (x8 over 4m32s)  kubelet          Node ha-928358-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s (x7 over 4m32s)  kubelet          Node ha-928358-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-928358-m03 event: Registered Node ha-928358-m03 in Controller
	
	
	Name:               ha-928358-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-928358-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=ha-928358
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_28T11_15_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:15:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-928358-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:15:40 +0000   Mon, 28 Oct 2024 11:15:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    ha-928358-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ee6c88b1c8c4fa2aebbfe4047465ead
	  System UUID:                6ee6c88b-1c8c-4fa2-aebb-fe4047465ead
	  Boot ID:                    b70ab214-29c9-4d90-9700-0ff1df9971f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-k2ddr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m23s
	  kube-system                 kube-proxy-fl4b7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node ha-928358-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node ha-928358-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node ha-928358-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m22s                  node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  RegisteredNode           3m22s                  node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-928358-m04 event: Registered Node ha-928358-m04 in Controller
	  Normal  NodeReady                3m1s                   kubelet          Node ha-928358-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct28 11:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053627] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041855] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.945749] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.924544] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.657378] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.658005] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.063082] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059947] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.199848] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.133132] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.303491] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.303698] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.055659] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.938074] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +1.148998] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.072047] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.087002] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.352589] kauditd_printk_skb: 21 callbacks suppressed
	[Oct28 11:12] kauditd_printk_skb: 38 callbacks suppressed
	[ +49.929447] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854] <==
	{"level":"warn","ts":"2024-10-28T11:18:31.994958Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:31.999943Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.013323Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.017831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.029732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.037359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.044844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.050226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.056227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.067931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.100206Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.108446Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.129279Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.141274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.149894Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.160376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.167963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.175630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.182428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.187044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.193079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.199929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.200351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.207199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-28T11:18:32.208410Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8d50a8842d8d7ae5","from":"8d50a8842d8d7ae5","remote-peer-id":"ed53c0c0114aeaee","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:18:32 up 7 min,  0 users,  load average: 0.50, 0.51, 0.28
	Linux ha-928358 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a] <==
	I1028 11:18:02.315935       1 main.go:300] handling current node
	I1028 11:18:12.318153       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:12.318184       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:12.318402       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:12.318430       1 main.go:300] handling current node
	I1028 11:18:12.318441       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:12.318446       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:18:12.318605       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:12.318645       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:18:22.308838       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:22.308947       1 main.go:300] handling current node
	I1028 11:18:22.308976       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:22.309061       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	I1028 11:18:22.309300       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:22.309333       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:18:22.309462       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:22.309499       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:32.311258       1 main.go:296] Handling node with IPs: map[192.168.39.44:{}]
	I1028 11:18:32.311327       1 main.go:323] Node ha-928358-m03 has CIDR [10.244.2.0/24] 
	I1028 11:18:32.311802       1 main.go:296] Handling node with IPs: map[192.168.39.203:{}]
	I1028 11:18:32.311823       1 main.go:323] Node ha-928358-m04 has CIDR [10.244.3.0/24] 
	I1028 11:18:32.312207       1 main.go:296] Handling node with IPs: map[192.168.39.206:{}]
	I1028 11:18:32.312223       1 main.go:300] handling current node
	I1028 11:18:32.312249       1 main.go:296] Handling node with IPs: map[192.168.39.15:{}]
	I1028 11:18:32.312256       1 main.go:323] Node ha-928358-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52] <==
	I1028 11:11:44.249575       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1028 11:11:44.264324       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1028 11:11:44.266721       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 11:11:44.273696       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1028 11:11:44.441833       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 11:11:45.375393       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 11:11:45.401215       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1028 11:11:45.422922       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 11:11:50.040543       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1028 11:11:50.160325       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1028 11:14:35.737044       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49680: use of closed network connection
	E1028 11:14:35.939412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49710: use of closed network connection
	E1028 11:14:36.137760       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49736: use of closed network connection
	E1028 11:14:36.353242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49742: use of closed network connection
	E1028 11:14:36.573304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49764: use of closed network connection
	E1028 11:14:36.795811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49780: use of closed network connection
	E1028 11:14:36.981176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49798: use of closed network connection
	E1028 11:14:37.177919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49830: use of closed network connection
	E1028 11:14:37.363976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49844: use of closed network connection
	E1028 11:14:37.667823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49884: use of closed network connection
	E1028 11:14:37.860879       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49906: use of closed network connection
	E1028 11:14:38.044254       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49922: use of closed network connection
	E1028 11:14:38.230562       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49930: use of closed network connection
	E1028 11:14:38.433175       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49954: use of closed network connection
	E1028 11:14:38.620514       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49974: use of closed network connection
	
	
	==> kube-controller-manager [e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef] <==
	I1028 11:15:02.129745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m03"
	E1028 11:15:09.422518       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8k978 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8k978\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1028 11:15:09.795491       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-928358-m04\" does not exist"
	I1028 11:15:09.833650       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-928358-m04" podCIDRs=["10.244.3.0/24"]
	I1028 11:15:09.833720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:09.833754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.048409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.186481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:10.510390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:14.501689       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-928358-m04"
	I1028 11:15:14.502311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:14.708709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:20.001285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:31.204169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-928358-m04"
	I1028 11:15:31.204768       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:31.224821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:34.519983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:15:40.626763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m04"
	I1028 11:16:34.553439       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-928358-m04"
	I1028 11:16:34.556249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:34.585375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:34.698936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.004399ms"
	I1028 11:16:34.699212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.024µs"
	I1028 11:16:35.153194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	I1028 11:16:39.778629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-928358-m02"
	
	
	==> kube-proxy [6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 11:11:50.898284       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 11:11:50.922359       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E1028 11:11:50.922435       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 11:11:51.064127       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 11:11:51.064169       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 11:11:51.064206       1 server_linux.go:169] "Using iptables Proxier"
	I1028 11:11:51.084457       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 11:11:51.088588       1 server.go:483] "Version info" version="v1.31.2"
	I1028 11:11:51.088608       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:11:51.098854       1 config.go:199] "Starting service config controller"
	I1028 11:11:51.099108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 11:11:51.099342       1 config.go:328] "Starting node config controller"
	I1028 11:11:51.099355       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 11:11:51.122226       1 config.go:105] "Starting endpoint slice config controller"
	I1028 11:11:51.122243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 11:11:51.199431       1 shared_informer.go:320] Caches are synced for node config
	I1028 11:11:51.199505       1 shared_informer.go:320] Caches are synced for service config
	I1028 11:11:51.222697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583] <==
	W1028 11:11:43.540244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.540296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.541960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 11:11:43.542068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.589795       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 11:11:43.589913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.666909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.667067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.681223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 11:11:43.681426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.721299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 11:11:43.721931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 11:11:43.811114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 11:11:43.811345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 11:11:46.351113       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 11:15:09.905243       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k2ddr\": pod kindnet-k2ddr is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k2ddr" node="ha-928358-m04"
	E1028 11:15:09.908212       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1733f64f-2a73-414c-a048-b4ad6b9bd117(kube-system/kindnet-k2ddr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k2ddr"
	E1028 11:15:09.910352       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k2ddr\": pod kindnet-k2ddr is already assigned to node \"ha-928358-m04\"" pod="kube-system/kindnet-k2ddr"
	I1028 11:15:09.910453       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k2ddr" node="ha-928358-m04"
	E1028 11:15:09.907070       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fl4b7\": pod kube-proxy-fl4b7 is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fl4b7" node="ha-928358-m04"
	E1028 11:15:09.910582       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 48c26642-8d42-43a1-ad06-ba9408499bf8(kube-system/kube-proxy-fl4b7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fl4b7"
	E1028 11:15:09.910623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fl4b7\": pod kube-proxy-fl4b7 is already assigned to node \"ha-928358-m04\"" pod="kube-system/kube-proxy-fl4b7"
	I1028 11:15:09.910661       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fl4b7" node="ha-928358-m04"
	E1028 11:15:09.930971       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tswkg\": pod kube-proxy-tswkg is already assigned to node \"ha-928358-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tswkg" node="ha-928358-m04"
	E1028 11:15:09.931171       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tswkg\": pod kube-proxy-tswkg is already assigned to node \"ha-928358-m04\"" pod="kube-system/kube-proxy-tswkg"
	
	
	==> kubelet <==
	Oct 28 11:16:55 ha-928358 kubelet[1312]: E1028 11:16:55.514793    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114215514414818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:16:55 ha-928358 kubelet[1312]: E1028 11:16:55.515166    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114215514414818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:05 ha-928358 kubelet[1312]: E1028 11:17:05.516628    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114225516360078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:05 ha-928358 kubelet[1312]: E1028 11:17:05.517193    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114225516360078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:15 ha-928358 kubelet[1312]: E1028 11:17:15.518657    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114235518443764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:15 ha-928358 kubelet[1312]: E1028 11:17:15.518678    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114235518443764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:25 ha-928358 kubelet[1312]: E1028 11:17:25.532318    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114245531090228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:25 ha-928358 kubelet[1312]: E1028 11:17:25.532805    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114245531090228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:35 ha-928358 kubelet[1312]: E1028 11:17:35.534490    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114255534180329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:35 ha-928358 kubelet[1312]: E1028 11:17:35.534569    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114255534180329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.349514    1312 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 11:17:45 ha-928358 kubelet[1312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 11:17:45 ha-928358 kubelet[1312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.536867    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114265536656122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:45 ha-928358 kubelet[1312]: E1028 11:17:45.536910    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114265536656122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:55 ha-928358 kubelet[1312]: E1028 11:17:55.539160    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114275538681035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:17:55 ha-928358 kubelet[1312]: E1028 11:17:55.539208    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114275538681035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:05 ha-928358 kubelet[1312]: E1028 11:18:05.540899    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114285540540832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:05 ha-928358 kubelet[1312]: E1028 11:18:05.540940    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114285540540832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:15 ha-928358 kubelet[1312]: E1028 11:18:15.543044    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114295542712895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:15 ha-928358 kubelet[1312]: E1028 11:18:15.543124    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114295542712895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:25 ha-928358 kubelet[1312]: E1028 11:18:25.544764    1312 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114305544540799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 11:18:25 ha-928358 kubelet[1312]: E1028 11:18:25.544789    1312 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730114305544540799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146320,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-928358 -n ha-928358
helpers_test.go:261: (dbg) Run:  kubectl --context ha-928358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (413.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-928358 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-928358 -v=7 --alsologtostderr
E1028 11:20:09.886900  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-928358 -v=7 --alsologtostderr: exit status 82 (2m1.975489886s)

                                                
                                                
-- stdout --
	* Stopping node "ha-928358-m04"  ...
	* Stopping node "ha-928358-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:18:33.364152  156473 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:18:33.364298  156473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:18:33.364309  156473 out.go:358] Setting ErrFile to fd 2...
	I1028 11:18:33.364314  156473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:18:33.364524  156473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:18:33.364785  156473 out.go:352] Setting JSON to false
	I1028 11:18:33.364893  156473 mustload.go:65] Loading cluster: ha-928358
	I1028 11:18:33.365338  156473 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:18:33.365440  156473 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:18:33.365665  156473 mustload.go:65] Loading cluster: ha-928358
	I1028 11:18:33.365847  156473 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:18:33.365904  156473 stop.go:39] StopHost: ha-928358-m04
	I1028 11:18:33.366311  156473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:18:33.366401  156473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:18:33.382249  156473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I1028 11:18:33.382870  156473 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:18:33.383434  156473 main.go:141] libmachine: Using API Version  1
	I1028 11:18:33.383462  156473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:18:33.383806  156473 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:18:33.386575  156473 out.go:177] * Stopping node "ha-928358-m04"  ...
	I1028 11:18:33.388009  156473 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 11:18:33.388044  156473 main.go:141] libmachine: (ha-928358-m04) Calling .DriverName
	I1028 11:18:33.388318  156473 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 11:18:33.388352  156473 main.go:141] libmachine: (ha-928358-m04) Calling .GetSSHHostname
	I1028 11:18:33.391292  156473 main.go:141] libmachine: (ha-928358-m04) DBG | domain ha-928358-m04 has defined MAC address 52:54:00:6d:e8:c6 in network mk-ha-928358
	I1028 11:18:33.391791  156473 main.go:141] libmachine: (ha-928358-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:e8:c6", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:14:55 +0000 UTC Type:0 Mac:52:54:00:6d:e8:c6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-928358-m04 Clientid:01:52:54:00:6d:e8:c6}
	I1028 11:18:33.391841  156473 main.go:141] libmachine: (ha-928358-m04) DBG | domain ha-928358-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:6d:e8:c6 in network mk-ha-928358
	I1028 11:18:33.391969  156473 main.go:141] libmachine: (ha-928358-m04) Calling .GetSSHPort
	I1028 11:18:33.392158  156473 main.go:141] libmachine: (ha-928358-m04) Calling .GetSSHKeyPath
	I1028 11:18:33.392672  156473 main.go:141] libmachine: (ha-928358-m04) Calling .GetSSHUsername
	I1028 11:18:33.392853  156473 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m04/id_rsa Username:docker}
	I1028 11:18:33.482042  156473 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 11:18:33.538368  156473 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 11:18:33.592327  156473 main.go:141] libmachine: Stopping "ha-928358-m04"...
	I1028 11:18:33.592371  156473 main.go:141] libmachine: (ha-928358-m04) Calling .GetState
	I1028 11:18:33.594353  156473 main.go:141] libmachine: (ha-928358-m04) Calling .Stop
	I1028 11:18:33.598289  156473 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 0/120
	I1028 11:18:34.873679  156473 main.go:141] libmachine: (ha-928358-m04) Calling .GetState
	I1028 11:18:34.875003  156473 main.go:141] libmachine: Machine "ha-928358-m04" was stopped.
	I1028 11:18:34.875021  156473 stop.go:75] duration metric: took 1.487014296s to stop
	I1028 11:18:34.875059  156473 stop.go:39] StopHost: ha-928358-m03
	I1028 11:18:34.875379  156473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:18:34.875428  156473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:18:34.890838  156473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I1028 11:18:34.891418  156473 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:18:34.891941  156473 main.go:141] libmachine: Using API Version  1
	I1028 11:18:34.892017  156473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:18:34.892424  156473 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:18:34.894445  156473 out.go:177] * Stopping node "ha-928358-m03"  ...
	I1028 11:18:34.895772  156473 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 11:18:34.895798  156473 main.go:141] libmachine: (ha-928358-m03) Calling .DriverName
	I1028 11:18:34.896028  156473 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 11:18:34.896063  156473 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHHostname
	I1028 11:18:34.899121  156473 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:18:34.899565  156473 main.go:141] libmachine: (ha-928358-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d3:f9", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:13:28 +0000 UTC Type:0 Mac:52:54:00:7e:d3:f9 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-928358-m03 Clientid:01:52:54:00:7e:d3:f9}
	I1028 11:18:34.899610  156473 main.go:141] libmachine: (ha-928358-m03) DBG | domain ha-928358-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:7e:d3:f9 in network mk-ha-928358
	I1028 11:18:34.899666  156473 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHPort
	I1028 11:18:34.899810  156473 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHKeyPath
	I1028 11:18:34.899938  156473 main.go:141] libmachine: (ha-928358-m03) Calling .GetSSHUsername
	I1028 11:18:34.900066  156473 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m03/id_rsa Username:docker}
	I1028 11:18:34.988002  156473 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 11:18:35.024007  156473 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 11:18:35.079881  156473 main.go:141] libmachine: Stopping "ha-928358-m03"...
	I1028 11:18:35.079923  156473 main.go:141] libmachine: (ha-928358-m03) Calling .GetState
	I1028 11:18:35.081395  156473 main.go:141] libmachine: (ha-928358-m03) Calling .Stop
	I1028 11:18:35.085222  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 0/120
	I1028 11:18:36.086725  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 1/120
	I1028 11:18:37.088336  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 2/120
	I1028 11:18:38.089858  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 3/120
	I1028 11:18:39.091336  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 4/120
	I1028 11:18:40.093732  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 5/120
	I1028 11:18:41.095289  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 6/120
	I1028 11:18:42.096607  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 7/120
	I1028 11:18:43.098156  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 8/120
	I1028 11:18:44.099839  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 9/120
	I1028 11:18:45.101974  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 10/120
	I1028 11:18:46.104207  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 11/120
	I1028 11:18:47.105813  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 12/120
	I1028 11:18:48.107334  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 13/120
	I1028 11:18:49.108976  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 14/120
	I1028 11:18:50.111038  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 15/120
	I1028 11:18:51.112873  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 16/120
	I1028 11:18:52.114318  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 17/120
	I1028 11:18:53.116047  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 18/120
	I1028 11:18:54.117653  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 19/120
	I1028 11:18:55.119855  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 20/120
	I1028 11:18:56.121404  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 21/120
	I1028 11:18:57.122920  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 22/120
	I1028 11:18:58.124151  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 23/120
	I1028 11:18:59.125714  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 24/120
	I1028 11:19:00.127231  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 25/120
	I1028 11:19:01.128713  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 26/120
	I1028 11:19:02.130271  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 27/120
	I1028 11:19:03.131947  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 28/120
	I1028 11:19:04.133408  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 29/120
	I1028 11:19:05.135242  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 30/120
	I1028 11:19:06.137007  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 31/120
	I1028 11:19:07.138569  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 32/120
	I1028 11:19:08.140323  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 33/120
	I1028 11:19:09.141763  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 34/120
	I1028 11:19:10.143746  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 35/120
	I1028 11:19:11.145444  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 36/120
	I1028 11:19:12.147076  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 37/120
	I1028 11:19:13.148681  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 38/120
	I1028 11:19:14.150261  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 39/120
	I1028 11:19:15.151596  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 40/120
	I1028 11:19:16.153189  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 41/120
	I1028 11:19:17.154939  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 42/120
	I1028 11:19:18.156357  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 43/120
	I1028 11:19:19.157832  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 44/120
	I1028 11:19:20.160285  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 45/120
	I1028 11:19:21.161838  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 46/120
	I1028 11:19:22.163200  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 47/120
	I1028 11:19:23.164673  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 48/120
	I1028 11:19:24.166220  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 49/120
	I1028 11:19:25.167990  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 50/120
	I1028 11:19:26.169566  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 51/120
	I1028 11:19:27.171012  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 52/120
	I1028 11:19:28.172494  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 53/120
	I1028 11:19:29.173947  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 54/120
	I1028 11:19:30.175670  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 55/120
	I1028 11:19:31.176924  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 56/120
	I1028 11:19:32.179506  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 57/120
	I1028 11:19:33.180709  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 58/120
	I1028 11:19:34.182711  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 59/120
	I1028 11:19:35.184659  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 60/120
	I1028 11:19:36.186240  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 61/120
	I1028 11:19:37.187692  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 62/120
	I1028 11:19:38.189000  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 63/120
	I1028 11:19:39.190388  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 64/120
	I1028 11:19:40.192374  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 65/120
	I1028 11:19:41.193693  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 66/120
	I1028 11:19:42.195970  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 67/120
	I1028 11:19:43.197384  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 68/120
	I1028 11:19:44.198879  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 69/120
	I1028 11:19:45.200675  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 70/120
	I1028 11:19:46.202104  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 71/120
	I1028 11:19:47.203588  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 72/120
	I1028 11:19:48.204924  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 73/120
	I1028 11:19:49.206281  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 74/120
	I1028 11:19:50.208076  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 75/120
	I1028 11:19:51.209564  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 76/120
	I1028 11:19:52.210882  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 77/120
	I1028 11:19:53.212400  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 78/120
	I1028 11:19:54.213790  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 79/120
	I1028 11:19:55.215572  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 80/120
	I1028 11:19:56.217083  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 81/120
	I1028 11:19:57.218555  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 82/120
	I1028 11:19:58.220035  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 83/120
	I1028 11:19:59.221322  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 84/120
	I1028 11:20:00.222885  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 85/120
	I1028 11:20:01.224446  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 86/120
	I1028 11:20:02.226185  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 87/120
	I1028 11:20:03.228033  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 88/120
	I1028 11:20:04.230163  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 89/120
	I1028 11:20:05.232319  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 90/120
	I1028 11:20:06.233943  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 91/120
	I1028 11:20:07.235908  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 92/120
	I1028 11:20:08.237686  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 93/120
	I1028 11:20:09.239878  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 94/120
	I1028 11:20:10.241933  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 95/120
	I1028 11:20:11.243154  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 96/120
	I1028 11:20:12.244655  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 97/120
	I1028 11:20:13.246074  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 98/120
	I1028 11:20:14.247582  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 99/120
	I1028 11:20:15.249417  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 100/120
	I1028 11:20:16.250763  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 101/120
	I1028 11:20:17.252090  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 102/120
	I1028 11:20:18.254259  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 103/120
	I1028 11:20:19.256013  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 104/120
	I1028 11:20:20.257804  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 105/120
	I1028 11:20:21.259249  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 106/120
	I1028 11:20:22.260615  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 107/120
	I1028 11:20:23.262101  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 108/120
	I1028 11:20:24.263551  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 109/120
	I1028 11:20:25.265024  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 110/120
	I1028 11:20:26.266494  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 111/120
	I1028 11:20:27.267899  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 112/120
	I1028 11:20:28.269235  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 113/120
	I1028 11:20:29.270691  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 114/120
	I1028 11:20:30.272407  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 115/120
	I1028 11:20:31.273896  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 116/120
	I1028 11:20:32.275238  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 117/120
	I1028 11:20:33.276754  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 118/120
	I1028 11:20:34.278430  156473 main.go:141] libmachine: (ha-928358-m03) Waiting for machine to stop 119/120
	I1028 11:20:35.279366  156473 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 11:20:35.279457  156473 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 11:20:35.281730  156473 out.go:201] 
	W1028 11:20:35.283442  156473 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 11:20:35.283459  156473 out.go:270] * 
	W1028 11:20:35.286113  156473 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 11:20:35.287770  156473 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-928358 -v=7 --alsologtostderr" : exit status 82
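The 120 "Waiting for machine to stop i/120" lines and the GUEST_STOP_TIMEOUT above trace a bounded polling loop: the driver checks the VM state roughly once per second and gives up with an error once the attempt budget is spent. The Go sketch below illustrates only that pattern; getState, the VMState type, and the one-second interval are assumptions for illustration, not the actual minikube/libmachine source.

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// VMState is a minimal stand-in for the driver's machine state.
type VMState int

const (
	Running VMState = iota
	Stopped
)

// getState is a hypothetical helper standing in for the driver's GetState call.
// In this sketch the VM never reaches Stopped, matching the failing run above.
func getState() (VMState, error) {
	return Running, nil
}

// waitForStop polls the VM state once per second, up to maxAttempts times,
// mirroring the "Waiting for machine to stop i/120" lines in the log.
func waitForStop(maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		st, err := getState()
		if err != nil {
			return fmt.Errorf("getting state: %w", err)
		}
		if st == Stopped {
			return nil
		}
		log.Printf("Waiting for machine to stop %d/%d", i, maxAttempts)
		time.Sleep(time.Second)
	}
	// Exhausting the budget is what the caller surfaces as GUEST_STOP_TIMEOUT.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The run above used a budget of 120 attempts, i.e. roughly two minutes.
	if err := waitForStop(120); err != nil {
		fmt.Println("stop err:", err)
	}
}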
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-928358 --wait=true -v=7 --alsologtostderr
E1028 11:20:37.589388  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:22:38.998298  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:24:02.063688  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:25:09.886882  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-928358 --wait=true -v=7 --alsologtostderr: (4m48.458288539s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-928358
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-928358 -n ha-928358
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 logs -n 25: (2.664459065s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m04 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp testdata/cp-test.txt                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m04_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03:/home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m03 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-928358 node stop m02 -v=7                                                    | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-928358 node start m02 -v=7                                                   | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-928358 -v=7                                                          | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-928358 -v=7                                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-928358 --wait=true -v=7                                                   | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:20 UTC | 28 Oct 24 11:25 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-928358                                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:25 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:20:35
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:20:35.338796  156977 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:20:35.338899  156977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:20:35.338906  156977 out.go:358] Setting ErrFile to fd 2...
	I1028 11:20:35.338910  156977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:20:35.339075  156977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:20:35.339607  156977 out.go:352] Setting JSON to false
	I1028 11:20:35.340561  156977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3778,"bootTime":1730110657,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:20:35.340678  156977 start.go:139] virtualization: kvm guest
	I1028 11:20:35.343305  156977 out.go:177] * [ha-928358] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:20:35.345040  156977 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:20:35.345064  156977 notify.go:220] Checking for updates...
	I1028 11:20:35.347910  156977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:20:35.349225  156977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:20:35.350728  156977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:20:35.352226  156977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:20:35.353749  156977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:20:35.355634  156977 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:20:35.355759  156977 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:20:35.356209  156977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:20:35.356258  156977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:20:35.372245  156977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40257
	I1028 11:20:35.372832  156977 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:20:35.373413  156977 main.go:141] libmachine: Using API Version  1
	I1028 11:20:35.373439  156977 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:20:35.373874  156977 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:20:35.374108  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:20:35.411493  156977 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 11:20:35.413034  156977 start.go:297] selected driver: kvm2
	I1028 11:20:35.413050  156977 start.go:901] validating driver "kvm2" against &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:20:35.413196  156977 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:20:35.413572  156977 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:20:35.413687  156977 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:20:35.429741  156977 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:20:35.430561  156977 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:20:35.430611  156977 cni.go:84] Creating CNI manager for ""
	I1028 11:20:35.430685  156977 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 11:20:35.430767  156977 start.go:340] cluster config:
	{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:20:35.430922  156977 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:20:35.432762  156977 out.go:177] * Starting "ha-928358" primary control-plane node in "ha-928358" cluster
	I1028 11:20:35.434180  156977 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:20:35.434230  156977 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:20:35.434247  156977 cache.go:56] Caching tarball of preloaded images
	I1028 11:20:35.434328  156977 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:20:35.434342  156977 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:20:35.434496  156977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:20:35.434733  156977 start.go:360] acquireMachinesLock for ha-928358: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:20:35.434787  156977 start.go:364] duration metric: took 32.858µs to acquireMachinesLock for "ha-928358"
	I1028 11:20:35.434808  156977 start.go:96] Skipping create...Using existing machine configuration
	I1028 11:20:35.434818  156977 fix.go:54] fixHost starting: 
	I1028 11:20:35.435131  156977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:20:35.435173  156977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:20:35.450347  156977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I1028 11:20:35.450870  156977 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:20:35.451422  156977 main.go:141] libmachine: Using API Version  1
	I1028 11:20:35.451447  156977 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:20:35.451822  156977 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:20:35.451992  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:20:35.452176  156977 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:20:35.453696  156977 fix.go:112] recreateIfNeeded on ha-928358: state=Running err=<nil>
	W1028 11:20:35.453721  156977 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 11:20:35.455874  156977 out.go:177] * Updating the running kvm2 "ha-928358" VM ...
	I1028 11:20:35.457399  156977 machine.go:93] provisionDockerMachine start ...
	I1028 11:20:35.457415  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:20:35.457638  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:35.460254  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.460657  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.460677  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.460834  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:35.460995  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.461144  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.461234  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:35.461351  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:20:35.461588  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:20:35.461604  156977 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:20:35.598746  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358
	
	I1028 11:20:35.598818  156977 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:20:35.599099  156977 buildroot.go:166] provisioning hostname "ha-928358"
	I1028 11:20:35.599120  156977 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:20:35.599341  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:35.602206  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.602705  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.602742  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.602861  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:35.603059  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.603229  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.603340  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:35.603523  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:20:35.603684  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:20:35.603695  156977 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358 && echo "ha-928358" | sudo tee /etc/hostname
	I1028 11:20:35.744650  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358
	
	I1028 11:20:35.744676  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:35.747975  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.748379  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.748398  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.748650  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:35.748866  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.749054  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.749193  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:35.749368  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:20:35.749563  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:20:35.749580  156977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:20:35.870632  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:20:35.870667  156977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:20:35.870707  156977 buildroot.go:174] setting up certificates
	I1028 11:20:35.870722  156977 provision.go:84] configureAuth start
	I1028 11:20:35.870733  156977 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:20:35.870983  156977 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:20:35.873799  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.874181  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.874216  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.874340  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:35.876852  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.877205  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.877234  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.877427  156977 provision.go:143] copyHostCerts
	I1028 11:20:35.877456  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:20:35.877505  156977 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:20:35.877517  156977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:20:35.877608  156977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:20:35.877700  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:20:35.877718  156977 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:20:35.877725  156977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:20:35.877750  156977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:20:35.877816  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:20:35.877841  156977 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:20:35.877851  156977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:20:35.877899  156977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:20:35.877975  156977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358 san=[127.0.0.1 192.168.39.206 ha-928358 localhost minikube]
	I1028 11:20:36.016977  156977 provision.go:177] copyRemoteCerts
	I1028 11:20:36.017039  156977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:20:36.017063  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:36.019711  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:36.019996  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:36.020035  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:36.020285  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:36.020466  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:36.020614  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:36.020743  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:20:36.113723  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:20:36.113789  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:20:36.143334  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:20:36.143436  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1028 11:20:36.173234  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:20:36.173315  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:20:36.203798  156977 provision.go:87] duration metric: took 333.060293ms to configureAuth
	I1028 11:20:36.203834  156977 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:20:36.204100  156977 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:20:36.204174  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:36.206849  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:36.207308  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:36.207339  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:36.207580  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:36.207778  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:36.207958  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:36.208103  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:36.208257  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:20:36.208468  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:20:36.208492  156977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:22:07.082955  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:22:07.083013  156977 machine.go:96] duration metric: took 1m31.625600557s to provisionDockerMachine
	I1028 11:22:07.083043  156977 start.go:293] postStartSetup for "ha-928358" (driver="kvm2")
	I1028 11:22:07.083057  156977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:22:07.083082  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.083429  156977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:22:07.083462  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.086615  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.087048  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.087071  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.087251  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.087424  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.087590  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.087724  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:22:07.177070  156977 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:22:07.182535  156977 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:22:07.182564  156977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:22:07.182633  156977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:22:07.182714  156977 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:22:07.182726  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:22:07.182806  156977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:22:07.192962  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:22:07.217920  156977 start.go:296] duration metric: took 134.858002ms for postStartSetup
	I1028 11:22:07.217966  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.218279  156977 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1028 11:22:07.218310  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.220813  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.221239  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.221259  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.221477  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.221659  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.221828  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.221959  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	W1028 11:22:07.314200  156977 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1028 11:22:07.314233  156977 fix.go:56] duration metric: took 1m31.879416103s for fixHost
	I1028 11:22:07.314253  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.316966  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.317337  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.317364  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.317542  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.317744  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.317912  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.318078  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.318234  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:22:07.318400  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:22:07.318410  156977 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:22:07.434679  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114527.403223915
	
	I1028 11:22:07.434701  156977 fix.go:216] guest clock: 1730114527.403223915
	I1028 11:22:07.434725  156977 fix.go:229] Guest: 2024-10-28 11:22:07.403223915 +0000 UTC Remote: 2024-10-28 11:22:07.31423947 +0000 UTC m=+92.014183183 (delta=88.984445ms)
	I1028 11:22:07.434757  156977 fix.go:200] guest clock delta is within tolerance: 88.984445ms
	I1028 11:22:07.434762  156977 start.go:83] releasing machines lock for "ha-928358", held for 1m31.999962557s
	I1028 11:22:07.434782  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.435062  156977 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:22:07.438263  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.438680  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.438701  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.438933  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.439494  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.439655  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.439737  156977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:22:07.439804  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.439847  156977 ssh_runner.go:195] Run: cat /version.json
	I1028 11:22:07.439866  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.442590  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.442847  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.442976  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.443006  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.443163  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.443317  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.443338  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.443376  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.443512  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.443532  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.443693  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.443712  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:22:07.443823  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.443985  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:22:07.527636  156977 ssh_runner.go:195] Run: systemctl --version
	I1028 11:22:07.551472  156977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:22:07.716136  156977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:22:07.725490  156977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:22:07.725589  156977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:22:07.735759  156977 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 11:22:07.735788  156977 start.go:495] detecting cgroup driver to use...
	I1028 11:22:07.735846  156977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:22:07.753424  156977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:22:07.768160  156977 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:22:07.768245  156977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:22:07.782428  156977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:22:07.812595  156977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:22:08.026681  156977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:22:08.238609  156977 docker.go:233] disabling docker service ...
	I1028 11:22:08.238690  156977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:22:08.258943  156977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:22:08.274112  156977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:22:08.422485  156977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:22:08.574129  156977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:22:08.589416  156977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:22:08.609417  156977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:22:08.609481  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.621152  156977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:22:08.621222  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.632765  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.643988  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.655174  156977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:22:08.666849  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.678641  156977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.691551  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.702646  156977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:22:08.712901  156977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:22:08.722980  156977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:22:08.871106  156977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:22:18.475752  156977 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.604603351s)
	I1028 11:22:18.475795  156977 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:22:18.475852  156977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:22:18.481274  156977 start.go:563] Will wait 60s for crictl version
	I1028 11:22:18.481347  156977 ssh_runner.go:195] Run: which crictl
	I1028 11:22:18.485688  156977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:22:18.525904  156977 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:22:18.525997  156977 ssh_runner.go:195] Run: crio --version
	I1028 11:22:18.556484  156977 ssh_runner.go:195] Run: crio --version
	I1028 11:22:18.589193  156977 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:22:18.590781  156977 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:22:18.594048  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:18.594604  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:18.594634  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:18.594887  156977 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:22:18.599915  156977 kubeadm.go:883] updating cluster {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:22:18.600043  156977 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:22:18.600087  156977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:22:18.651841  156977 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:22:18.651865  156977 crio.go:433] Images already preloaded, skipping extraction
	I1028 11:22:18.651916  156977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:22:18.688227  156977 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:22:18.688252  156977 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:22:18.688262  156977 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.2 crio true true} ...
	I1028 11:22:18.688359  156977 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:22:18.688421  156977 ssh_runner.go:195] Run: crio config
	I1028 11:22:18.743571  156977 cni.go:84] Creating CNI manager for ""
	I1028 11:22:18.743597  156977 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 11:22:18.743617  156977 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:22:18.743651  156977 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-928358 NodeName:ha-928358 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:22:18.743802  156977 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-928358"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:22:18.743828  156977 kube-vip.go:115] generating kube-vip config ...
	I1028 11:22:18.743877  156977 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:22:18.756673  156977 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:22:18.756779  156977 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:22:18.756833  156977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:22:18.766990  156977 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:22:18.767105  156977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:22:18.777382  156977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:22:18.794888  156977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:22:18.812778  156977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:22:18.831814  156977 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:22:18.851515  156977 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:22:18.856305  156977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:22:19.005738  156977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:22:19.022355  156977 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.206
	I1028 11:22:19.022386  156977 certs.go:194] generating shared ca certs ...
	I1028 11:22:19.022409  156977 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:22:19.022612  156977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:22:19.022666  156977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:22:19.022680  156977 certs.go:256] generating profile certs ...
	I1028 11:22:19.022777  156977 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:22:19.022827  156977 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.128adb72
	I1028 11:22:19.022848  156977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.128adb72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.44 192.168.39.254]
	I1028 11:22:19.143164  156977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.128adb72 ...
	I1028 11:22:19.143197  156977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.128adb72: {Name:mkd37f1f27bd058ac3af0fa3cfa58d69b3d7e1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:22:19.143366  156977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.128adb72 ...
	I1028 11:22:19.143379  156977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.128adb72: {Name:mk8e5edf0115cd0224cc2401fdf9246b44ea90c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:22:19.143446  156977 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.128adb72 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:22:19.143595  156977 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.128adb72 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:22:19.143724  156977 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:22:19.143741  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:22:19.143754  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:22:19.143766  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:22:19.143776  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:22:19.143786  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:22:19.143801  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:22:19.143814  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:22:19.143826  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:22:19.143872  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:22:19.143899  156977 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:22:19.143909  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:22:19.143933  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:22:19.143954  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:22:19.143974  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:22:19.144011  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:22:19.144040  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:22:19.144054  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:22:19.144066  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:22:19.144693  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:22:19.171265  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:22:19.196726  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:22:19.222871  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:22:19.249240  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 11:22:19.275040  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:22:19.301016  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:22:19.329424  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:22:19.357472  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:22:19.385539  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:22:19.413905  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:22:19.441378  156977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:22:19.460609  156977 ssh_runner.go:195] Run: openssl version
	I1028 11:22:19.467117  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:22:19.480476  156977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:22:19.485333  156977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:22:19.485387  156977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:22:19.491494  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:22:19.503139  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:22:19.516708  156977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:22:19.521964  156977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:22:19.522028  156977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:22:19.528470  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:22:19.540653  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:22:19.552935  156977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:22:19.557805  156977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:22:19.557863  156977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:22:19.563968  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:22:19.574232  156977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:22:19.579123  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 11:22:19.585002  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 11:22:19.590800  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 11:22:19.596614  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 11:22:19.602738  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 11:22:19.608790  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 11:22:19.615016  156977 kubeadm.go:392] StartCluster: {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:22:19.615137  156977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:22:19.615192  156977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:22:19.657901  156977 cri.go:89] found id: "b064dfe6c5d7d8f7052c673794d408f0e96300f31c099061f8c0108afcbb82dd"
	I1028 11:22:19.657924  156977 cri.go:89] found id: "e4d92e9c68286fe6e2c8e9d7d34b5d8a225f83d8a0b5f10b639116b5e6ebad90"
	I1028 11:22:19.657928  156977 cri.go:89] found id: "85323441d696e2bc04ed1c5f6adb016c03366f6f2c0efd7ef393bd5182ffb779"
	I1028 11:22:19.657931  156977 cri.go:89] found id: "2fc124c51f095945362ef5a4ea2e88b292d6961ee8330e302a2284112ebbf713"
	I1028 11:22:19.657934  156977 cri.go:89] found id: "70072bdb6487e6d834047fa5c36cd7a624207d3efae1325ebb30ca7aa06851fc"
	I1028 11:22:19.657937  156977 cri.go:89] found id: "267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134"
	I1028 11:22:19.657940  156977 cri.go:89] found id: "0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962"
	I1028 11:22:19.657942  156977 cri.go:89] found id: "93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a"
	I1028 11:22:19.657945  156977 cri.go:89] found id: "6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7"
	I1028 11:22:19.657959  156977 cri.go:89] found id: "b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653"
	I1028 11:22:19.657962  156977 cri.go:89] found id: "a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854"
	I1028 11:22:19.657964  156977 cri.go:89] found id: "f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52"
	I1028 11:22:19.657966  156977 cri.go:89] found id: "e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef"
	I1028 11:22:19.657969  156977 cri.go:89] found id: "1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583"
	I1028 11:22:19.657973  156977 cri.go:89] found id: ""
	I1028 11:22:19.658025  156977 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-928358 -n ha-928358
helpers_test.go:261: (dbg) Run:  kubectl --context ha-928358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (413.86s)
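Note on the restart log above: the node comes back without its previous CNI/Kubernetes state because the restore step probes /var/lib/minikube/backup over SSH with `sudo ls --almost-all -1`, the probe exits with status 2 ("No such file or directory"), and minikube logs "cannot read backup folder, skipping restore" as a warning and continues, re-provisioning CRI-O, the kubeadm config, the kube-vip manifest and the certificates from scratch. A minimal Go sketch of that probe-then-skip pattern follows; it is illustrative only — the helper name and the local exec.Command runner are assumptions, not minikube's libmachine/SSH-based implementation.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// hasBackupDir reports whether /var/lib/minikube/backup can be listed.
	// It mirrors the probe in the log above: run
	// `sudo ls --almost-all -1 /var/lib/minikube/backup` and treat any
	// non-zero exit (such as status 2, "No such file or directory")
	// as "no backup to restore".
	func hasBackupDir(run func(name string, args ...string) error) bool {
		return run("sudo", "ls", "--almost-all", "-1", "/var/lib/minikube/backup") == nil
	}
	
	func main() {
		// Local runner for the sketch; minikube runs the same command over SSH.
		runLocal := func(name string, args ...string) error {
			return exec.Command(name, args...).Run()
		}
		if !hasBackupDir(runLocal) {
			fmt.Println("cannot read backup folder, skipping restore")
			return
		}
		fmt.Println("restoring /etc/cni and /etc/kubernetes from /var/lib/minikube/backup")
	}

In the failing run the probe's non-zero exit is surfaced only as a warning (fix.go:99) and the start proceeds; the directories that would normally be restored (/etc/cni and /etc/kubernetes) are the same ones the stop path backs up, as seen in the StopCluster log below.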

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 stop -v=7 --alsologtostderr
E1028 11:27:38.998960  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-928358 stop -v=7 --alsologtostderr: exit status 82 (2m0.493044932s)

                                                
                                                
-- stdout --
	* Stopping node "ha-928358-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:25:44.582310  158927 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:25:44.582436  158927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:25:44.582445  158927 out.go:358] Setting ErrFile to fd 2...
	I1028 11:25:44.582449  158927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:25:44.582619  158927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:25:44.582858  158927 out.go:352] Setting JSON to false
	I1028 11:25:44.582931  158927 mustload.go:65] Loading cluster: ha-928358
	I1028 11:25:44.583335  158927 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:25:44.583416  158927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:25:44.583600  158927 mustload.go:65] Loading cluster: ha-928358
	I1028 11:25:44.583740  158927 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:25:44.583774  158927 stop.go:39] StopHost: ha-928358-m04
	I1028 11:25:44.584158  158927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:25:44.584208  158927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:25:44.599814  158927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35161
	I1028 11:25:44.600419  158927 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:25:44.601042  158927 main.go:141] libmachine: Using API Version  1
	I1028 11:25:44.601074  158927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:25:44.601389  158927 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:25:44.603951  158927 out.go:177] * Stopping node "ha-928358-m04"  ...
	I1028 11:25:44.606087  158927 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 11:25:44.606124  158927 main.go:141] libmachine: (ha-928358-m04) Calling .DriverName
	I1028 11:25:44.606389  158927 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 11:25:44.606417  158927 main.go:141] libmachine: (ha-928358-m04) Calling .GetSSHHostname
	I1028 11:25:44.610150  158927 main.go:141] libmachine: (ha-928358-m04) DBG | domain ha-928358-m04 has defined MAC address 52:54:00:6d:e8:c6 in network mk-ha-928358
	I1028 11:25:44.610644  158927 main.go:141] libmachine: (ha-928358-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:e8:c6", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:25:11 +0000 UTC Type:0 Mac:52:54:00:6d:e8:c6 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-928358-m04 Clientid:01:52:54:00:6d:e8:c6}
	I1028 11:25:44.610679  158927 main.go:141] libmachine: (ha-928358-m04) DBG | domain ha-928358-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:6d:e8:c6 in network mk-ha-928358
	I1028 11:25:44.610878  158927 main.go:141] libmachine: (ha-928358-m04) Calling .GetSSHPort
	I1028 11:25:44.611051  158927 main.go:141] libmachine: (ha-928358-m04) Calling .GetSSHKeyPath
	I1028 11:25:44.611184  158927 main.go:141] libmachine: (ha-928358-m04) Calling .GetSSHUsername
	I1028 11:25:44.611328  158927 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358-m04/id_rsa Username:docker}
	I1028 11:25:44.704679  158927 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 11:25:44.758573  158927 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 11:25:44.811357  158927 main.go:141] libmachine: Stopping "ha-928358-m04"...
	I1028 11:25:44.811393  158927 main.go:141] libmachine: (ha-928358-m04) Calling .GetState
	I1028 11:25:44.813501  158927 main.go:141] libmachine: (ha-928358-m04) Calling .Stop
	I1028 11:25:44.817978  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 0/120
	I1028 11:25:45.819338  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 1/120
	I1028 11:25:46.820567  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 2/120
	I1028 11:25:47.821938  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 3/120
	I1028 11:25:48.823866  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 4/120
	I1028 11:25:49.825872  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 5/120
	I1028 11:25:50.827376  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 6/120
	I1028 11:25:51.828980  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 7/120
	I1028 11:25:52.830609  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 8/120
	I1028 11:25:53.832392  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 9/120
	I1028 11:25:54.834657  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 10/120
	I1028 11:25:55.836251  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 11/120
	I1028 11:25:56.837802  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 12/120
	I1028 11:25:57.840119  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 13/120
	I1028 11:25:58.841683  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 14/120
	I1028 11:25:59.843980  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 15/120
	I1028 11:26:00.845571  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 16/120
	I1028 11:26:01.847676  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 17/120
	I1028 11:26:02.849494  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 18/120
	I1028 11:26:03.850807  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 19/120
	I1028 11:26:04.853161  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 20/120
	I1028 11:26:05.854442  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 21/120
	I1028 11:26:06.856120  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 22/120
	I1028 11:26:07.857436  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 23/120
	I1028 11:26:08.858923  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 24/120
	I1028 11:26:09.860649  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 25/120
	I1028 11:26:10.862231  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 26/120
	I1028 11:26:11.864145  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 27/120
	I1028 11:26:12.865874  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 28/120
	I1028 11:26:13.867359  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 29/120
	I1028 11:26:14.869493  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 30/120
	I1028 11:26:15.870691  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 31/120
	I1028 11:26:16.872316  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 32/120
	I1028 11:26:17.873570  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 33/120
	I1028 11:26:18.874908  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 34/120
	I1028 11:26:19.877208  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 35/120
	I1028 11:26:20.878587  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 36/120
	I1028 11:26:21.879973  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 37/120
	I1028 11:26:22.881458  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 38/120
	I1028 11:26:23.883003  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 39/120
	I1028 11:26:24.884870  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 40/120
	I1028 11:26:25.886223  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 41/120
	I1028 11:26:26.888126  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 42/120
	I1028 11:26:27.890727  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 43/120
	I1028 11:26:28.892149  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 44/120
	I1028 11:26:29.894091  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 45/120
	I1028 11:26:30.896456  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 46/120
	I1028 11:26:31.898237  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 47/120
	I1028 11:26:32.899660  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 48/120
	I1028 11:26:33.901204  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 49/120
	I1028 11:26:34.903591  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 50/120
	I1028 11:26:35.904936  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 51/120
	I1028 11:26:36.906334  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 52/120
	I1028 11:26:37.908160  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 53/120
	I1028 11:26:38.909410  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 54/120
	I1028 11:26:39.910937  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 55/120
	I1028 11:26:40.912050  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 56/120
	I1028 11:26:41.913304  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 57/120
	I1028 11:26:42.914623  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 58/120
	I1028 11:26:43.916283  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 59/120
	I1028 11:26:44.918527  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 60/120
	I1028 11:26:45.919906  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 61/120
	I1028 11:26:46.921239  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 62/120
	I1028 11:26:47.922656  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 63/120
	I1028 11:26:48.924228  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 64/120
	I1028 11:26:49.926331  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 65/120
	I1028 11:26:50.927754  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 66/120
	I1028 11:26:51.929188  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 67/120
	I1028 11:26:52.930644  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 68/120
	I1028 11:26:53.932115  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 69/120
	I1028 11:26:54.934584  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 70/120
	I1028 11:26:55.935847  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 71/120
	I1028 11:26:56.937430  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 72/120
	I1028 11:26:57.939038  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 73/120
	I1028 11:26:58.940428  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 74/120
	I1028 11:26:59.942161  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 75/120
	I1028 11:27:00.943566  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 76/120
	I1028 11:27:01.945037  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 77/120
	I1028 11:27:02.946624  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 78/120
	I1028 11:27:03.947913  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 79/120
	I1028 11:27:04.949577  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 80/120
	I1028 11:27:05.951042  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 81/120
	I1028 11:27:06.952407  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 82/120
	I1028 11:27:07.953815  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 83/120
	I1028 11:27:08.955257  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 84/120
	I1028 11:27:09.957407  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 85/120
	I1028 11:27:10.958784  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 86/120
	I1028 11:27:11.960165  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 87/120
	I1028 11:27:12.961559  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 88/120
	I1028 11:27:13.963028  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 89/120
	I1028 11:27:14.965277  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 90/120
	I1028 11:27:15.966659  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 91/120
	I1028 11:27:16.968200  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 92/120
	I1028 11:27:17.969889  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 93/120
	I1028 11:27:18.971991  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 94/120
	I1028 11:27:19.973837  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 95/120
	I1028 11:27:20.976436  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 96/120
	I1028 11:27:21.977978  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 97/120
	I1028 11:27:22.979734  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 98/120
	I1028 11:27:23.981680  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 99/120
	I1028 11:27:24.983262  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 100/120
	I1028 11:27:25.984630  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 101/120
	I1028 11:27:26.986106  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 102/120
	I1028 11:27:27.988048  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 103/120
	I1028 11:27:28.989625  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 104/120
	I1028 11:27:29.991883  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 105/120
	I1028 11:27:30.993954  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 106/120
	I1028 11:27:31.995414  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 107/120
	I1028 11:27:32.996705  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 108/120
	I1028 11:27:33.998083  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 109/120
	I1028 11:27:34.999938  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 110/120
	I1028 11:27:36.001303  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 111/120
	I1028 11:27:37.002970  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 112/120
	I1028 11:27:38.005496  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 113/120
	I1028 11:27:39.006927  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 114/120
	I1028 11:27:40.008996  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 115/120
	I1028 11:27:41.011062  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 116/120
	I1028 11:27:42.012655  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 117/120
	I1028 11:27:43.014659  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 118/120
	I1028 11:27:44.016062  158927 main.go:141] libmachine: (ha-928358-m04) Waiting for machine to stop 119/120
	I1028 11:27:45.017475  158927 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 11:27:45.017570  158927 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 11:27:45.019824  158927 out.go:201] 
	W1028 11:27:45.021415  158927 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 11:27:45.021431  158927 out.go:270] * 
	* 
	W1028 11:27:45.023827  158927 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 11:27:45.025416  158927 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-928358 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr: (19.042609149s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-928358 -n ha-928358
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 logs -n 25: (2.214881466s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m04 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp testdata/cp-test.txt                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358:/home/docker/cp-test_ha-928358-m04_ha-928358.txt                      |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358 sudo cat                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358.txt                                |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m02:/home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m02 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m03:/home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n                                                                | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | ha-928358-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-928358 ssh -n ha-928358-m03 sudo cat                                         | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC | 28 Oct 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-928358 node stop m02 -v=7                                                    | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-928358 node start m02 -v=7                                                   | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-928358 -v=7                                                          | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-928358 -v=7                                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-928358 --wait=true -v=7                                                   | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:20 UTC | 28 Oct 24 11:25 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-928358                                                               | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:25 UTC |                     |
	| node    | ha-928358 node delete m03 -v=7                                                  | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:25 UTC | 28 Oct 24 11:25 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-928358 stop -v=7                                                             | ha-928358 | jenkins | v1.34.0 | 28 Oct 24 11:25 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:20:35
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:20:35.338796  156977 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:20:35.338899  156977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:20:35.338906  156977 out.go:358] Setting ErrFile to fd 2...
	I1028 11:20:35.338910  156977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:20:35.339075  156977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:20:35.339607  156977 out.go:352] Setting JSON to false
	I1028 11:20:35.340561  156977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3778,"bootTime":1730110657,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:20:35.340678  156977 start.go:139] virtualization: kvm guest
	I1028 11:20:35.343305  156977 out.go:177] * [ha-928358] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:20:35.345040  156977 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:20:35.345064  156977 notify.go:220] Checking for updates...
	I1028 11:20:35.347910  156977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:20:35.349225  156977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:20:35.350728  156977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:20:35.352226  156977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:20:35.353749  156977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:20:35.355634  156977 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:20:35.355759  156977 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:20:35.356209  156977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:20:35.356258  156977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:20:35.372245  156977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40257
	I1028 11:20:35.372832  156977 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:20:35.373413  156977 main.go:141] libmachine: Using API Version  1
	I1028 11:20:35.373439  156977 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:20:35.373874  156977 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:20:35.374108  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:20:35.411493  156977 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 11:20:35.413034  156977 start.go:297] selected driver: kvm2
	I1028 11:20:35.413050  156977 start.go:901] validating driver "kvm2" against &{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:20:35.413196  156977 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:20:35.413572  156977 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:20:35.413687  156977 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:20:35.429741  156977 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:20:35.430561  156977 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:20:35.430611  156977 cni.go:84] Creating CNI manager for ""
	I1028 11:20:35.430685  156977 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 11:20:35.430767  156977 start.go:340] cluster config:
	{Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:20:35.430922  156977 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:20:35.432762  156977 out.go:177] * Starting "ha-928358" primary control-plane node in "ha-928358" cluster
	I1028 11:20:35.434180  156977 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:20:35.434230  156977 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:20:35.434247  156977 cache.go:56] Caching tarball of preloaded images
	I1028 11:20:35.434328  156977 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:20:35.434342  156977 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:20:35.434496  156977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/config.json ...
	I1028 11:20:35.434733  156977 start.go:360] acquireMachinesLock for ha-928358: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:20:35.434787  156977 start.go:364] duration metric: took 32.858µs to acquireMachinesLock for "ha-928358"
	I1028 11:20:35.434808  156977 start.go:96] Skipping create...Using existing machine configuration
	I1028 11:20:35.434818  156977 fix.go:54] fixHost starting: 
	I1028 11:20:35.435131  156977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:20:35.435173  156977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:20:35.450347  156977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I1028 11:20:35.450870  156977 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:20:35.451422  156977 main.go:141] libmachine: Using API Version  1
	I1028 11:20:35.451447  156977 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:20:35.451822  156977 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:20:35.451992  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:20:35.452176  156977 main.go:141] libmachine: (ha-928358) Calling .GetState
	I1028 11:20:35.453696  156977 fix.go:112] recreateIfNeeded on ha-928358: state=Running err=<nil>
	W1028 11:20:35.453721  156977 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 11:20:35.455874  156977 out.go:177] * Updating the running kvm2 "ha-928358" VM ...
	I1028 11:20:35.457399  156977 machine.go:93] provisionDockerMachine start ...
	I1028 11:20:35.457415  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:20:35.457638  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:35.460254  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.460657  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.460677  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.460834  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:35.460995  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.461144  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.461234  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:35.461351  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:20:35.461588  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:20:35.461604  156977 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:20:35.598746  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358
	
	I1028 11:20:35.598818  156977 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:20:35.599099  156977 buildroot.go:166] provisioning hostname "ha-928358"
	I1028 11:20:35.599120  156977 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:20:35.599341  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:35.602206  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.602705  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.602742  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.602861  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:35.603059  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.603229  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.603340  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:35.603523  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:20:35.603684  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:20:35.603695  156977 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-928358 && echo "ha-928358" | sudo tee /etc/hostname
	I1028 11:20:35.744650  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-928358
	
	I1028 11:20:35.744676  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:35.747975  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.748379  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.748398  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.748650  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:35.748866  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.749054  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:35.749193  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:35.749368  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:20:35.749563  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:20:35.749580  156977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-928358' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-928358/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-928358' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:20:35.870632  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:20:35.870667  156977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:20:35.870707  156977 buildroot.go:174] setting up certificates
	I1028 11:20:35.870722  156977 provision.go:84] configureAuth start
	I1028 11:20:35.870733  156977 main.go:141] libmachine: (ha-928358) Calling .GetMachineName
	I1028 11:20:35.870983  156977 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:20:35.873799  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.874181  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.874216  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.874340  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:35.876852  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.877205  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:35.877234  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:35.877427  156977 provision.go:143] copyHostCerts
	I1028 11:20:35.877456  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:20:35.877505  156977 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:20:35.877517  156977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:20:35.877608  156977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:20:35.877700  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:20:35.877718  156977 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:20:35.877725  156977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:20:35.877750  156977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:20:35.877816  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:20:35.877841  156977 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:20:35.877851  156977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:20:35.877899  156977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:20:35.877975  156977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.ha-928358 san=[127.0.0.1 192.168.39.206 ha-928358 localhost minikube]
	I1028 11:20:36.016977  156977 provision.go:177] copyRemoteCerts
	I1028 11:20:36.017039  156977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:20:36.017063  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:36.019711  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:36.019996  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:36.020035  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:36.020285  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:36.020466  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:36.020614  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:36.020743  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:20:36.113723  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:20:36.113789  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:20:36.143334  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:20:36.143436  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1028 11:20:36.173234  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:20:36.173315  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:20:36.203798  156977 provision.go:87] duration metric: took 333.060293ms to configureAuth
	I1028 11:20:36.203834  156977 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:20:36.204100  156977 config.go:182] Loaded profile config "ha-928358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:20:36.204174  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:20:36.206849  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:36.207308  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:20:36.207339  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:20:36.207580  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:20:36.207778  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:36.207958  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:20:36.208103  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:20:36.208257  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:20:36.208468  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:20:36.208492  156977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:22:07.082955  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:22:07.083013  156977 machine.go:96] duration metric: took 1m31.625600557s to provisionDockerMachine
	I1028 11:22:07.083043  156977 start.go:293] postStartSetup for "ha-928358" (driver="kvm2")
	I1028 11:22:07.083057  156977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:22:07.083082  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.083429  156977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:22:07.083462  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.086615  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.087048  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.087071  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.087251  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.087424  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.087590  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.087724  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:22:07.177070  156977 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:22:07.182535  156977 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:22:07.182564  156977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:22:07.182633  156977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:22:07.182714  156977 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:22:07.182726  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:22:07.182806  156977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:22:07.192962  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:22:07.217920  156977 start.go:296] duration metric: took 134.858002ms for postStartSetup
	I1028 11:22:07.217966  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.218279  156977 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1028 11:22:07.218310  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.220813  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.221239  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.221259  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.221477  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.221659  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.221828  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.221959  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	W1028 11:22:07.314200  156977 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1028 11:22:07.314233  156977 fix.go:56] duration metric: took 1m31.879416103s for fixHost
	I1028 11:22:07.314253  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.316966  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.317337  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.317364  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.317542  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.317744  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.317912  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.318078  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.318234  156977 main.go:141] libmachine: Using SSH client type: native
	I1028 11:22:07.318400  156977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1028 11:22:07.318410  156977 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:22:07.434679  156977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730114527.403223915
	
	I1028 11:22:07.434701  156977 fix.go:216] guest clock: 1730114527.403223915
	I1028 11:22:07.434725  156977 fix.go:229] Guest: 2024-10-28 11:22:07.403223915 +0000 UTC Remote: 2024-10-28 11:22:07.31423947 +0000 UTC m=+92.014183183 (delta=88.984445ms)
	I1028 11:22:07.434757  156977 fix.go:200] guest clock delta is within tolerance: 88.984445ms
	I1028 11:22:07.434762  156977 start.go:83] releasing machines lock for "ha-928358", held for 1m31.999962557s
	I1028 11:22:07.434782  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.435062  156977 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:22:07.438263  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.438680  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.438701  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.438933  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.439494  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.439655  156977 main.go:141] libmachine: (ha-928358) Calling .DriverName
	I1028 11:22:07.439737  156977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:22:07.439804  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.439847  156977 ssh_runner.go:195] Run: cat /version.json
	I1028 11:22:07.439866  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHHostname
	I1028 11:22:07.442590  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.442847  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.442976  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.443006  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.443163  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.443317  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:07.443338  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:07.443376  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.443512  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHPort
	I1028 11:22:07.443532  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.443693  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHKeyPath
	I1028 11:22:07.443712  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:22:07.443823  156977 main.go:141] libmachine: (ha-928358) Calling .GetSSHUsername
	I1028 11:22:07.443985  156977 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/ha-928358/id_rsa Username:docker}
	I1028 11:22:07.527636  156977 ssh_runner.go:195] Run: systemctl --version
	I1028 11:22:07.551472  156977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:22:07.716136  156977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:22:07.725490  156977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:22:07.725589  156977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:22:07.735759  156977 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 11:22:07.735788  156977 start.go:495] detecting cgroup driver to use...
	I1028 11:22:07.735846  156977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:22:07.753424  156977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:22:07.768160  156977 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:22:07.768245  156977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:22:07.782428  156977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:22:07.812595  156977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:22:08.026681  156977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:22:08.238609  156977 docker.go:233] disabling docker service ...
	I1028 11:22:08.238690  156977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:22:08.258943  156977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:22:08.274112  156977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:22:08.422485  156977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:22:08.574129  156977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:22:08.589416  156977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:22:08.609417  156977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:22:08.609481  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.621152  156977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:22:08.621222  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.632765  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.643988  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.655174  156977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:22:08.666849  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.678641  156977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.691551  156977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:22:08.702646  156977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:22:08.712901  156977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:22:08.722980  156977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:22:08.871106  156977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:22:18.475752  156977 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.604603351s)
	I1028 11:22:18.475795  156977 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:22:18.475852  156977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:22:18.481274  156977 start.go:563] Will wait 60s for crictl version
	I1028 11:22:18.481347  156977 ssh_runner.go:195] Run: which crictl
	I1028 11:22:18.485688  156977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:22:18.525904  156977 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:22:18.525997  156977 ssh_runner.go:195] Run: crio --version
	I1028 11:22:18.556484  156977 ssh_runner.go:195] Run: crio --version
	I1028 11:22:18.589193  156977 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:22:18.590781  156977 main.go:141] libmachine: (ha-928358) Calling .GetIP
	I1028 11:22:18.594048  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:18.594604  156977 main.go:141] libmachine: (ha-928358) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:b2:b7", ip: ""} in network mk-ha-928358: {Iface:virbr1 ExpiryTime:2024-10-28 12:11:14 +0000 UTC Type:0 Mac:52:54:00:dd:b2:b7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-928358 Clientid:01:52:54:00:dd:b2:b7}
	I1028 11:22:18.594634  156977 main.go:141] libmachine: (ha-928358) DBG | domain ha-928358 has defined IP address 192.168.39.206 and MAC address 52:54:00:dd:b2:b7 in network mk-ha-928358
	I1028 11:22:18.594887  156977 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:22:18.599915  156977 kubeadm.go:883] updating cluster {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:22:18.600043  156977 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:22:18.600087  156977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:22:18.651841  156977 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:22:18.651865  156977 crio.go:433] Images already preloaded, skipping extraction
	I1028 11:22:18.651916  156977 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:22:18.688227  156977 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:22:18.688252  156977 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:22:18.688262  156977 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.2 crio true true} ...
	I1028 11:22:18.688359  156977 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-928358 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:22:18.688421  156977 ssh_runner.go:195] Run: crio config
	I1028 11:22:18.743571  156977 cni.go:84] Creating CNI manager for ""
	I1028 11:22:18.743597  156977 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1028 11:22:18.743617  156977 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:22:18.743651  156977 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-928358 NodeName:ha-928358 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:22:18.743802  156977 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-928358"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:22:18.743828  156977 kube-vip.go:115] generating kube-vip config ...
	I1028 11:22:18.743877  156977 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1028 11:22:18.756673  156977 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1028 11:22:18.756779  156977 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.4
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1028 11:22:18.756833  156977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:22:18.766990  156977 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:22:18.767105  156977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1028 11:22:18.777382  156977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1028 11:22:18.794888  156977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:22:18.812778  156977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 11:22:18.831814  156977 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1028 11:22:18.851515  156977 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1028 11:22:18.856305  156977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:22:19.005738  156977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:22:19.022355  156977 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358 for IP: 192.168.39.206
	I1028 11:22:19.022386  156977 certs.go:194] generating shared ca certs ...
	I1028 11:22:19.022409  156977 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:22:19.022612  156977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:22:19.022666  156977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:22:19.022680  156977 certs.go:256] generating profile certs ...
	I1028 11:22:19.022777  156977 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/client.key
	I1028 11:22:19.022827  156977 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.128adb72
	I1028 11:22:19.022848  156977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.128adb72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206 192.168.39.15 192.168.39.44 192.168.39.254]
	I1028 11:22:19.143164  156977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.128adb72 ...
	I1028 11:22:19.143197  156977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.128adb72: {Name:mkd37f1f27bd058ac3af0fa3cfa58d69b3d7e1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:22:19.143366  156977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.128adb72 ...
	I1028 11:22:19.143379  156977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.128adb72: {Name:mk8e5edf0115cd0224cc2401fdf9246b44ea90c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:22:19.143446  156977 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt.128adb72 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt
	I1028 11:22:19.143595  156977 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key.128adb72 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key
	I1028 11:22:19.143724  156977 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key
	I1028 11:22:19.143741  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:22:19.143754  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:22:19.143766  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:22:19.143776  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:22:19.143786  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:22:19.143801  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:22:19.143814  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:22:19.143826  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:22:19.143872  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:22:19.143899  156977 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:22:19.143909  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:22:19.143933  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:22:19.143954  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:22:19.143974  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:22:19.144011  156977 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:22:19.144040  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:22:19.144054  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:22:19.144066  156977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:22:19.144693  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:22:19.171265  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:22:19.196726  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:22:19.222871  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:22:19.249240  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 11:22:19.275040  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:22:19.301016  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:22:19.329424  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/ha-928358/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:22:19.357472  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:22:19.385539  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:22:19.413905  156977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:22:19.441378  156977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:22:19.460609  156977 ssh_runner.go:195] Run: openssl version
	I1028 11:22:19.467117  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:22:19.480476  156977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:22:19.485333  156977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:22:19.485387  156977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:22:19.491494  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:22:19.503139  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:22:19.516708  156977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:22:19.521964  156977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:22:19.522028  156977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:22:19.528470  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:22:19.540653  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:22:19.552935  156977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:22:19.557805  156977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:22:19.557863  156977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:22:19.563968  156977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:22:19.574232  156977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:22:19.579123  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 11:22:19.585002  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 11:22:19.590800  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 11:22:19.596614  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 11:22:19.602738  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 11:22:19.608790  156977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 11:22:19.615016  156977 kubeadm.go:392] StartCluster: {Name:ha-928358 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-928358 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.44 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.203 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:22:19.615137  156977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:22:19.615192  156977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:22:19.657901  156977 cri.go:89] found id: "b064dfe6c5d7d8f7052c673794d408f0e96300f31c099061f8c0108afcbb82dd"
	I1028 11:22:19.657924  156977 cri.go:89] found id: "e4d92e9c68286fe6e2c8e9d7d34b5d8a225f83d8a0b5f10b639116b5e6ebad90"
	I1028 11:22:19.657928  156977 cri.go:89] found id: "85323441d696e2bc04ed1c5f6adb016c03366f6f2c0efd7ef393bd5182ffb779"
	I1028 11:22:19.657931  156977 cri.go:89] found id: "2fc124c51f095945362ef5a4ea2e88b292d6961ee8330e302a2284112ebbf713"
	I1028 11:22:19.657934  156977 cri.go:89] found id: "70072bdb6487e6d834047fa5c36cd7a624207d3efae1325ebb30ca7aa06851fc"
	I1028 11:22:19.657937  156977 cri.go:89] found id: "267b82290689582a0a0ee8533614e3a18e3cf721bda943169042b0a2c0013134"
	I1028 11:22:19.657940  156977 cri.go:89] found id: "0ec81022134ba675fca1fe33fd56bb1c1b72660d7ab1204c49803f7726d2e962"
	I1028 11:22:19.657942  156977 cri.go:89] found id: "93fda9ea564e1bc6421f28b62f577bc80acbad6daa3cbe46ad7d10a2affccb6a"
	I1028 11:22:19.657945  156977 cri.go:89] found id: "6af78d85866c9190d2dc9f22b29f12fcb3588528f7cf6c7c87bbcd359e41f2a7"
	I1028 11:22:19.657959  156977 cri.go:89] found id: "b4500f47684e694f345ba53e5c499fc9a0afe5ee4b93fd6bac3dcecfdd4e6653"
	I1028 11:22:19.657962  156977 cri.go:89] found id: "a75ab3d16aba2288925d652b039911c92709c3eb1a43e63131d61fd58598b854"
	I1028 11:22:19.657964  156977 cri.go:89] found id: "f8221151573cfa2580df6dace415437b4000e4d91f1295508a899e63aa917e52"
	I1028 11:22:19.657966  156977 cri.go:89] found id: "e735b7e201a7d2d87909893e0ce3cbfab869a7f4002a4257cd296032066715ef"
	I1028 11:22:19.657969  156977 cri.go:89] found id: "1be8f3556358ee1f9109fa3d01c6b86a90663b094340123b80449ee7c8cbc583"
	I1028 11:22:19.657973  156977 cri.go:89] found id: ""
	I1028 11:22:19.658025  156977 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-928358 -n ha-928358
helpers_test.go:261: (dbg) Run:  kubectl --context ha-928358 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.39s)
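The post-mortem log above runs openssl x509 -noout -checkend 86400 against each control-plane certificate; openssl exits non-zero if the certificate expires within the given number of seconds, so 86400 means "still valid for at least 24 hours". The Go sketch below performs an equivalent check; it is illustrative only (not minikube code), and the certificate path is just one of the paths probed in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; substitute any of the probed certificates.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of "-checkend 86400": fail if the cert does not outlive the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}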

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (333.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-450140
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-450140
E1028 11:45:09.886385  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-450140: exit status 82 (2m1.961378913s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-450140-m03"  ...
	* Stopping node "multinode-450140-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-450140" : exit status 82
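In this run, exit status 82 accompanies the GUEST_STOP_TIMEOUT error shown in the stderr block above: the stop command could not bring the VMs out of the "Running" state before giving up. The Go sketch below shows one way a caller could run the same command under an explicit timeout and read that exit status; it is illustrative only, reusing the binary path and profile name from this log.

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The failed stop above ran for roughly two minutes before giving up; bound it explicitly.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-450140")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("stop succeeded")
	case errors.As(err, &ee):
		// In the run above this would report 82 (the GUEST_STOP_TIMEOUT case).
		fmt.Printf("stop failed with exit status %d\n", ee.ExitCode())
	default:
		fmt.Printf("stop could not be started: %v\n", err)
	}
}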
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-450140 --wait=true -v=8 --alsologtostderr
E1028 11:47:38.998217  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:48:12.954477  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-450140 --wait=true -v=8 --alsologtostderr: (3m28.287658549s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-450140
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-450140 -n multinode-450140
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-450140 logs -n 25: (2.141700173s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m02:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile25711815/001/cp-test_multinode-450140-m02.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m02:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140:/home/docker/cp-test_multinode-450140-m02_multinode-450140.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140 sudo cat                                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m02_multinode-450140.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m02:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03:/home/docker/cp-test_multinode-450140-m02_multinode-450140-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140-m03 sudo cat                                   | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m02_multinode-450140-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp testdata/cp-test.txt                                                | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile25711815/001/cp-test_multinode-450140-m03.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140:/home/docker/cp-test_multinode-450140-m03_multinode-450140.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140 sudo cat                                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m03_multinode-450140.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02:/home/docker/cp-test_multinode-450140-m03_multinode-450140-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140-m02 sudo cat                                   | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m03_multinode-450140-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-450140 node stop m03                                                          | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	| node    | multinode-450140 node start                                                             | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:43 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-450140                                                                | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:43 UTC |                     |
	| stop    | -p multinode-450140                                                                     | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:43 UTC |                     |
	| start   | -p multinode-450140                                                                     | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:45 UTC | 28 Oct 24 11:48 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-450140                                                                | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:48 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:45:13
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:45:13.327600  169037 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:45:13.327711  169037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:45:13.327719  169037 out.go:358] Setting ErrFile to fd 2...
	I1028 11:45:13.327725  169037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:45:13.327919  169037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:45:13.328519  169037 out.go:352] Setting JSON to false
	I1028 11:45:13.329494  169037 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5256,"bootTime":1730110657,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:45:13.329631  169037 start.go:139] virtualization: kvm guest
	I1028 11:45:13.332142  169037 out.go:177] * [multinode-450140] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:45:13.333719  169037 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:45:13.333775  169037 notify.go:220] Checking for updates...
	I1028 11:45:13.336799  169037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:45:13.338268  169037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:45:13.339797  169037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:45:13.341113  169037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:45:13.342388  169037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:45:13.344197  169037 config.go:182] Loaded profile config "multinode-450140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:45:13.344325  169037 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:45:13.345015  169037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:45:13.345109  169037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:45:13.361054  169037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I1028 11:45:13.361612  169037 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:45:13.362191  169037 main.go:141] libmachine: Using API Version  1
	I1028 11:45:13.362251  169037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:45:13.362650  169037 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:45:13.362945  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:45:13.400404  169037 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 11:45:13.401717  169037 start.go:297] selected driver: kvm2
	I1028 11:45:13.401736  169037 start.go:901] validating driver "kvm2" against &{Name:multinode-450140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-450140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:45:13.401889  169037 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:45:13.402253  169037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:45:13.402332  169037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:45:13.418818  169037 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:45:13.419550  169037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:45:13.419585  169037 cni.go:84] Creating CNI manager for ""
	I1028 11:45:13.419640  169037 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 11:45:13.419702  169037 start.go:340] cluster config:
	{Name:multinode-450140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-450140 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:45:13.419835  169037 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:45:13.423185  169037 out.go:177] * Starting "multinode-450140" primary control-plane node in "multinode-450140" cluster
	I1028 11:45:13.424681  169037 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:45:13.424730  169037 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:45:13.424741  169037 cache.go:56] Caching tarball of preloaded images
	I1028 11:45:13.424844  169037 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:45:13.424858  169037 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
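	(The preload step above only verifies that a cached tarball already exists under the profile's .minikube directory. A minimal way to confirm the cache by hand, using the exact path printed in the log, is:

	    $ ls -lh /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/

	If the tarball listed above is present, minikube skips the download, as the cache.go lines indicate.)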
	I1028 11:45:13.424969  169037 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/config.json ...
	I1028 11:45:13.425171  169037 start.go:360] acquireMachinesLock for multinode-450140: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:45:13.425219  169037 start.go:364] duration metric: took 27.139µs to acquireMachinesLock for "multinode-450140"
	I1028 11:45:13.425240  169037 start.go:96] Skipping create...Using existing machine configuration
	I1028 11:45:13.425248  169037 fix.go:54] fixHost starting: 
	I1028 11:45:13.425499  169037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:45:13.425547  169037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:45:13.441286  169037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I1028 11:45:13.441785  169037 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:45:13.442272  169037 main.go:141] libmachine: Using API Version  1
	I1028 11:45:13.442295  169037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:45:13.442576  169037 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:45:13.442757  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:45:13.442888  169037 main.go:141] libmachine: (multinode-450140) Calling .GetState
	I1028 11:45:13.444501  169037 fix.go:112] recreateIfNeeded on multinode-450140: state=Running err=<nil>
	W1028 11:45:13.444521  169037 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 11:45:13.446590  169037 out.go:177] * Updating the running kvm2 "multinode-450140" VM ...
	I1028 11:45:13.448117  169037 machine.go:93] provisionDockerMachine start ...
	I1028 11:45:13.448135  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:45:13.448333  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.451048  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.451515  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.451540  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.451657  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:13.451835  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.452014  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.452173  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:13.452375  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:45:13.452607  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:45:13.452621  169037 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:45:13.555199  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-450140
	
	I1028 11:45:13.555233  169037 main.go:141] libmachine: (multinode-450140) Calling .GetMachineName
	I1028 11:45:13.555494  169037 buildroot.go:166] provisioning hostname "multinode-450140"
	I1028 11:45:13.555518  169037 main.go:141] libmachine: (multinode-450140) Calling .GetMachineName
	I1028 11:45:13.555701  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.558726  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.559020  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.559046  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.559259  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:13.559577  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.559777  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.559951  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:13.560153  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:45:13.560354  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:45:13.560369  169037 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-450140 && echo "multinode-450140" | sudo tee /etc/hostname
	I1028 11:45:13.683996  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-450140
	
	I1028 11:45:13.684031  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.687391  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.687961  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.687993  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.688173  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:13.688405  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.688599  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.688750  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:13.688913  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:45:13.689132  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:45:13.689152  169037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-450140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-450140/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-450140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:45:13.790915  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:45:13.790948  169037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:45:13.790993  169037 buildroot.go:174] setting up certificates
	I1028 11:45:13.791005  169037 provision.go:84] configureAuth start
	I1028 11:45:13.791021  169037 main.go:141] libmachine: (multinode-450140) Calling .GetMachineName
	I1028 11:45:13.791338  169037 main.go:141] libmachine: (multinode-450140) Calling .GetIP
	I1028 11:45:13.794136  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.794519  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.794540  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.794720  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.796958  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.797388  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.797424  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.797628  169037 provision.go:143] copyHostCerts
	I1028 11:45:13.797658  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:45:13.797704  169037 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:45:13.797717  169037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:45:13.797788  169037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:45:13.797878  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:45:13.797896  169037 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:45:13.797902  169037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:45:13.797926  169037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:45:13.797985  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:45:13.798008  169037 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:45:13.798012  169037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:45:13.798033  169037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:45:13.798097  169037 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.multinode-450140 san=[127.0.0.1 192.168.39.184 localhost minikube multinode-450140]
	I1028 11:45:13.950957  169037 provision.go:177] copyRemoteCerts
	I1028 11:45:13.951021  169037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:45:13.951045  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.954061  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.954465  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.954489  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.954648  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:13.954829  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.955034  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:13.955173  169037 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140/id_rsa Username:docker}
	I1028 11:45:14.037088  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:45:14.037172  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 11:45:14.064480  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:45:14.064555  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:45:14.089739  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:45:14.089823  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:45:14.116166  169037 provision.go:87] duration metric: took 325.144551ms to configureAuth
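	(configureAuth above regenerates the machine's server certificate with the SANs listed at provision.go:117 and copies it to /etc/docker inside the guest. A minimal sketch for checking the pushed cert, assuming openssl is available in the guest (run via minikube ssh, for example):

	    $ sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'

	For this run the SANs should match the list in the log: 127.0.0.1, 192.168.39.184, localhost, minikube, multinode-450140.)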
	I1028 11:45:14.116198  169037 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:45:14.116439  169037 config.go:182] Loaded profile config "multinode-450140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:45:14.116527  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:14.119466  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:14.119842  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:14.119886  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:14.120046  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:14.120226  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:14.120385  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:14.120525  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:14.120684  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:45:14.120862  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:45:14.120881  169037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:46:44.852498  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:46:44.852532  169037 machine.go:96] duration metric: took 1m31.404402849s to provisionDockerMachine
	I1028 11:46:44.852549  169037 start.go:293] postStartSetup for "multinode-450140" (driver="kvm2")
	I1028 11:46:44.852566  169037 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:46:44.852592  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:44.852962  169037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:46:44.852998  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:46:44.856491  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:44.856939  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:44.856958  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:44.857173  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:46:44.857381  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:44.857551  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:46:44.857713  169037 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140/id_rsa Username:docker}
	I1028 11:46:44.937837  169037 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:46:44.942421  169037 command_runner.go:130] > NAME=Buildroot
	I1028 11:46:44.942444  169037 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 11:46:44.942450  169037 command_runner.go:130] > ID=buildroot
	I1028 11:46:44.942457  169037 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 11:46:44.942464  169037 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 11:46:44.942499  169037 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:46:44.942515  169037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:46:44.942589  169037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:46:44.942710  169037 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:46:44.942728  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:46:44.942885  169037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:46:44.952524  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:46:44.978535  169037 start.go:296] duration metric: took 125.963227ms for postStartSetup
	I1028 11:46:44.978580  169037 fix.go:56] duration metric: took 1m31.55333138s for fixHost
	I1028 11:46:44.978598  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:46:44.981553  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:44.981937  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:44.981967  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:44.982159  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:46:44.982347  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:44.982490  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:44.982650  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:46:44.982790  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:46:44.983015  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:46:44.983028  169037 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:46:45.083362  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116005.061035943
	
	I1028 11:46:45.083389  169037 fix.go:216] guest clock: 1730116005.061035943
	I1028 11:46:45.083400  169037 fix.go:229] Guest: 2024-10-28 11:46:45.061035943 +0000 UTC Remote: 2024-10-28 11:46:44.978583662 +0000 UTC m=+91.692702822 (delta=82.452281ms)
	I1028 11:46:45.083428  169037 fix.go:200] guest clock delta is within tolerance: 82.452281ms
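	(The clock check above runs `date +%s.%N` in the guest over SSH and compares it with the host time captured when the command returned; here the delta is ~82ms, well within tolerance. A rough manual equivalent, assuming the profile name from this run and that the outputs parse as plain numbers:

	    host_ts=$(date +%s.%N)
	    guest_ts=$(minikube -p multinode-450140 ssh -- date +%s.%N)
	    awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { printf "delta: %.3fs\n", g - h }'

	This is only an illustration of the comparison fix.go performs, not the exact code path.)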
	I1028 11:46:45.083442  169037 start.go:83] releasing machines lock for "multinode-450140", held for 1m31.658210704s
	I1028 11:46:45.083471  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:45.083715  169037 main.go:141] libmachine: (multinode-450140) Calling .GetIP
	I1028 11:46:45.086620  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.087001  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:45.087041  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.087214  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:45.087731  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:45.087898  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:45.088011  169037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:46:45.088054  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:46:45.088098  169037 ssh_runner.go:195] Run: cat /version.json
	I1028 11:46:45.088127  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:46:45.090595  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.090916  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.090981  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:45.091001  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.091181  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:46:45.091370  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:45.091444  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:45.091470  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.091516  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:46:45.091626  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:46:45.091695  169037 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140/id_rsa Username:docker}
	I1028 11:46:45.091779  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:45.091881  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:46:45.092030  169037 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140/id_rsa Username:docker}
	I1028 11:46:45.192437  169037 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1028 11:46:45.193418  169037 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1028 11:46:45.193589  169037 ssh_runner.go:195] Run: systemctl --version
	I1028 11:46:45.199640  169037 command_runner.go:130] > systemd 252 (252)
	I1028 11:46:45.199685  169037 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 11:46:45.199738  169037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:46:45.357469  169037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:46:45.366728  169037 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 11:46:45.366776  169037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:46:45.366822  169037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:46:45.377495  169037 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 11:46:45.377519  169037 start.go:495] detecting cgroup driver to use...
	I1028 11:46:45.377587  169037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:46:45.395950  169037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:46:45.410707  169037 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:46:45.410758  169037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:46:45.426056  169037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:46:45.440858  169037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:46:45.597856  169037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:46:45.745949  169037 docker.go:233] disabling docker service ...
	I1028 11:46:45.746028  169037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:46:45.768547  169037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:46:45.785660  169037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:46:45.936848  169037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:46:46.087764  169037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:46:46.102753  169037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:46:46.123060  169037 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1028 11:46:46.123567  169037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:46:46.123627  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.134754  169037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:46:46.134823  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.145652  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.156123  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.166855  169037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:46:46.178020  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.189033  169037 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.201263  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
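	(Taken together, the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings; this is a sketch reconstructed from the commands in this log, with section headers and unrelated keys omitted:

	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]

	The subsequent daemon-reload and `systemctl restart crio` pick up these changes.)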
	I1028 11:46:46.212413  169037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:46:46.222573  169037 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1028 11:46:46.222678  169037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:46:46.232467  169037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:46:46.368883  169037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:46:46.570557  169037 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:46:46.570621  169037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:46:46.576450  169037 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1028 11:46:46.576478  169037 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 11:46:46.576487  169037 command_runner.go:130] > Device: 0,22	Inode: 1259        Links: 1
	I1028 11:46:46.576496  169037 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 11:46:46.576503  169037 command_runner.go:130] > Access: 2024-10-28 11:46:46.438523975 +0000
	I1028 11:46:46.576511  169037 command_runner.go:130] > Modify: 2024-10-28 11:46:46.438523975 +0000
	I1028 11:46:46.576517  169037 command_runner.go:130] > Change: 2024-10-28 11:46:46.438523975 +0000
	I1028 11:46:46.576522  169037 command_runner.go:130] >  Birth: -
	I1028 11:46:46.576564  169037 start.go:563] Will wait 60s for crictl version
	I1028 11:46:46.576622  169037 ssh_runner.go:195] Run: which crictl
	I1028 11:46:46.580882  169037 command_runner.go:130] > /usr/bin/crictl
	I1028 11:46:46.581035  169037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:46:46.623175  169037 command_runner.go:130] > Version:  0.1.0
	I1028 11:46:46.623208  169037 command_runner.go:130] > RuntimeName:  cri-o
	I1028 11:46:46.623215  169037 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1028 11:46:46.623223  169037 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 11:46:46.623309  169037 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:46:46.623391  169037 ssh_runner.go:195] Run: crio --version
	I1028 11:46:46.653549  169037 command_runner.go:130] > crio version 1.29.1
	I1028 11:46:46.653570  169037 command_runner.go:130] > Version:        1.29.1
	I1028 11:46:46.653576  169037 command_runner.go:130] > GitCommit:      unknown
	I1028 11:46:46.653581  169037 command_runner.go:130] > GitCommitDate:  unknown
	I1028 11:46:46.653585  169037 command_runner.go:130] > GitTreeState:   clean
	I1028 11:46:46.653590  169037 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1028 11:46:46.653594  169037 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 11:46:46.653598  169037 command_runner.go:130] > Compiler:       gc
	I1028 11:46:46.653602  169037 command_runner.go:130] > Platform:       linux/amd64
	I1028 11:46:46.653606  169037 command_runner.go:130] > Linkmode:       dynamic
	I1028 11:46:46.653610  169037 command_runner.go:130] > BuildTags:      
	I1028 11:46:46.653615  169037 command_runner.go:130] >   containers_image_ostree_stub
	I1028 11:46:46.653619  169037 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 11:46:46.653623  169037 command_runner.go:130] >   btrfs_noversion
	I1028 11:46:46.653662  169037 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 11:46:46.653679  169037 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 11:46:46.653684  169037 command_runner.go:130] >   seccomp
	I1028 11:46:46.653687  169037 command_runner.go:130] > LDFlags:          unknown
	I1028 11:46:46.653701  169037 command_runner.go:130] > SeccompEnabled:   true
	I1028 11:46:46.653708  169037 command_runner.go:130] > AppArmorEnabled:  false
	I1028 11:46:46.653811  169037 ssh_runner.go:195] Run: crio --version
	I1028 11:46:46.684952  169037 command_runner.go:130] > crio version 1.29.1
	I1028 11:46:46.684998  169037 command_runner.go:130] > Version:        1.29.1
	I1028 11:46:46.685009  169037 command_runner.go:130] > GitCommit:      unknown
	I1028 11:46:46.685014  169037 command_runner.go:130] > GitCommitDate:  unknown
	I1028 11:46:46.685020  169037 command_runner.go:130] > GitTreeState:   clean
	I1028 11:46:46.685028  169037 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1028 11:46:46.685035  169037 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 11:46:46.685042  169037 command_runner.go:130] > Compiler:       gc
	I1028 11:46:46.685062  169037 command_runner.go:130] > Platform:       linux/amd64
	I1028 11:46:46.685070  169037 command_runner.go:130] > Linkmode:       dynamic
	I1028 11:46:46.685082  169037 command_runner.go:130] > BuildTags:      
	I1028 11:46:46.685090  169037 command_runner.go:130] >   containers_image_ostree_stub
	I1028 11:46:46.685099  169037 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 11:46:46.685106  169037 command_runner.go:130] >   btrfs_noversion
	I1028 11:46:46.685115  169037 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 11:46:46.685123  169037 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 11:46:46.685132  169037 command_runner.go:130] >   seccomp
	I1028 11:46:46.685141  169037 command_runner.go:130] > LDFlags:          unknown
	I1028 11:46:46.685151  169037 command_runner.go:130] > SeccompEnabled:   true
	I1028 11:46:46.685158  169037 command_runner.go:130] > AppArmorEnabled:  false
	I1028 11:46:46.688680  169037 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:46:46.690363  169037 main.go:141] libmachine: (multinode-450140) Calling .GetIP
	I1028 11:46:46.692879  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:46.693251  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:46.693276  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:46.693455  169037 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:46:46.698318  169037 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1028 11:46:46.698435  169037 kubeadm.go:883] updating cluster {Name:multinode-450140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.2 ClusterName:multinode-450140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:46:46.698579  169037 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:46:46.698626  169037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:46:46.743579  169037 command_runner.go:130] > {
	I1028 11:46:46.743600  169037 command_runner.go:130] >   "images": [
	I1028 11:46:46.743605  169037 command_runner.go:130] >     {
	I1028 11:46:46.743613  169037 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 11:46:46.743617  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743623  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 11:46:46.743626  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743630  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743639  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 11:46:46.743646  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 11:46:46.743650  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743655  169037 command_runner.go:130] >       "size": "94965812",
	I1028 11:46:46.743659  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.743666  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.743672  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.743677  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.743681  169037 command_runner.go:130] >     },
	I1028 11:46:46.743686  169037 command_runner.go:130] >     {
	I1028 11:46:46.743692  169037 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 11:46:46.743698  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743704  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 11:46:46.743707  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743711  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743719  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 11:46:46.743726  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 11:46:46.743732  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743736  169037 command_runner.go:130] >       "size": "1363676",
	I1028 11:46:46.743742  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.743750  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.743754  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.743758  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.743764  169037 command_runner.go:130] >     },
	I1028 11:46:46.743768  169037 command_runner.go:130] >     {
	I1028 11:46:46.743774  169037 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 11:46:46.743780  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743785  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 11:46:46.743789  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743792  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743800  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 11:46:46.743809  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 11:46:46.743813  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743819  169037 command_runner.go:130] >       "size": "31470524",
	I1028 11:46:46.743823  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.743829  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.743833  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.743840  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.743844  169037 command_runner.go:130] >     },
	I1028 11:46:46.743850  169037 command_runner.go:130] >     {
	I1028 11:46:46.743856  169037 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 11:46:46.743862  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743867  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 11:46:46.743873  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743877  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743886  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 11:46:46.743899  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 11:46:46.743905  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743909  169037 command_runner.go:130] >       "size": "63273227",
	I1028 11:46:46.743915  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.743919  169037 command_runner.go:130] >       "username": "nonroot",
	I1028 11:46:46.743925  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.743937  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.743943  169037 command_runner.go:130] >     },
	I1028 11:46:46.743947  169037 command_runner.go:130] >     {
	I1028 11:46:46.743955  169037 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 11:46:46.743959  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743964  169037 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 11:46:46.743968  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743972  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743981  169037 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 11:46:46.743989  169037 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 11:46:46.743995  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743999  169037 command_runner.go:130] >       "size": "149009664",
	I1028 11:46:46.744005  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744009  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.744015  169037 command_runner.go:130] >       },
	I1028 11:46:46.744018  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744034  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744041  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744045  169037 command_runner.go:130] >     },
	I1028 11:46:46.744050  169037 command_runner.go:130] >     {
	I1028 11:46:46.744056  169037 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 11:46:46.744063  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744068  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 11:46:46.744073  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744077  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744086  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 11:46:46.744096  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 11:46:46.744102  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744106  169037 command_runner.go:130] >       "size": "95274464",
	I1028 11:46:46.744112  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744116  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.744122  169037 command_runner.go:130] >       },
	I1028 11:46:46.744126  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744133  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744137  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744143  169037 command_runner.go:130] >     },
	I1028 11:46:46.744147  169037 command_runner.go:130] >     {
	I1028 11:46:46.744155  169037 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 11:46:46.744161  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744167  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 11:46:46.744172  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744176  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744186  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 11:46:46.744195  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 11:46:46.744199  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744206  169037 command_runner.go:130] >       "size": "89474374",
	I1028 11:46:46.744210  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744213  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.744219  169037 command_runner.go:130] >       },
	I1028 11:46:46.744223  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744229  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744233  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744239  169037 command_runner.go:130] >     },
	I1028 11:46:46.744242  169037 command_runner.go:130] >     {
	I1028 11:46:46.744250  169037 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 11:46:46.744254  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744259  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 11:46:46.744266  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744269  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744285  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 11:46:46.744294  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 11:46:46.744298  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744304  169037 command_runner.go:130] >       "size": "92783513",
	I1028 11:46:46.744308  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.744312  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744315  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744320  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744324  169037 command_runner.go:130] >     },
	I1028 11:46:46.744327  169037 command_runner.go:130] >     {
	I1028 11:46:46.744332  169037 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 11:46:46.744336  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744340  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 11:46:46.744344  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744347  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744354  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 11:46:46.744361  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 11:46:46.744364  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744368  169037 command_runner.go:130] >       "size": "68457798",
	I1028 11:46:46.744371  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744374  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.744377  169037 command_runner.go:130] >       },
	I1028 11:46:46.744381  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744384  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744388  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744390  169037 command_runner.go:130] >     },
	I1028 11:46:46.744394  169037 command_runner.go:130] >     {
	I1028 11:46:46.744399  169037 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 11:46:46.744403  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744407  169037 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 11:46:46.744413  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744417  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744425  169037 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 11:46:46.744434  169037 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 11:46:46.744440  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744444  169037 command_runner.go:130] >       "size": "742080",
	I1028 11:46:46.744450  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744454  169037 command_runner.go:130] >         "value": "65535"
	I1028 11:46:46.744460  169037 command_runner.go:130] >       },
	I1028 11:46:46.744464  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744471  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744474  169037 command_runner.go:130] >       "pinned": true
	I1028 11:46:46.744481  169037 command_runner.go:130] >     }
	I1028 11:46:46.744484  169037 command_runner.go:130] >   ]
	I1028 11:46:46.744490  169037 command_runner.go:130] > }
	I1028 11:46:46.745075  169037 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:46:46.745095  169037 crio.go:433] Images already preloaded, skipping extraction
	I1028 11:46:46.745143  169037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:46:46.778942  169037 command_runner.go:130] > {
	I1028 11:46:46.778966  169037 command_runner.go:130] >   "images": [
	I1028 11:46:46.778974  169037 command_runner.go:130] >     {
	I1028 11:46:46.778981  169037 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 11:46:46.778986  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.778992  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 11:46:46.778996  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779000  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779011  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 11:46:46.779021  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 11:46:46.779024  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779029  169037 command_runner.go:130] >       "size": "94965812",
	I1028 11:46:46.779033  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779041  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779052  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779057  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779061  169037 command_runner.go:130] >     },
	I1028 11:46:46.779064  169037 command_runner.go:130] >     {
	I1028 11:46:46.779070  169037 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 11:46:46.779075  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779080  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 11:46:46.779083  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779087  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779094  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 11:46:46.779101  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 11:46:46.779105  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779109  169037 command_runner.go:130] >       "size": "1363676",
	I1028 11:46:46.779113  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779152  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779161  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779165  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779168  169037 command_runner.go:130] >     },
	I1028 11:46:46.779171  169037 command_runner.go:130] >     {
	I1028 11:46:46.779177  169037 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 11:46:46.779181  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779186  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 11:46:46.779191  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779195  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779202  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 11:46:46.779210  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 11:46:46.779215  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779223  169037 command_runner.go:130] >       "size": "31470524",
	I1028 11:46:46.779227  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779230  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779234  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779243  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779247  169037 command_runner.go:130] >     },
	I1028 11:46:46.779259  169037 command_runner.go:130] >     {
	I1028 11:46:46.779268  169037 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 11:46:46.779273  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779278  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 11:46:46.779282  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779286  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779294  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 11:46:46.779305  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 11:46:46.779308  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779312  169037 command_runner.go:130] >       "size": "63273227",
	I1028 11:46:46.779317  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779325  169037 command_runner.go:130] >       "username": "nonroot",
	I1028 11:46:46.779331  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779335  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779340  169037 command_runner.go:130] >     },
	I1028 11:46:46.779344  169037 command_runner.go:130] >     {
	I1028 11:46:46.779350  169037 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 11:46:46.779355  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779360  169037 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 11:46:46.779364  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779367  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779374  169037 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 11:46:46.779383  169037 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 11:46:46.779387  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779392  169037 command_runner.go:130] >       "size": "149009664",
	I1028 11:46:46.779395  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779399  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.779403  169037 command_runner.go:130] >       },
	I1028 11:46:46.779407  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779410  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779415  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779419  169037 command_runner.go:130] >     },
	I1028 11:46:46.779422  169037 command_runner.go:130] >     {
	I1028 11:46:46.779431  169037 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 11:46:46.779435  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779440  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 11:46:46.779446  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779451  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779460  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 11:46:46.779469  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 11:46:46.779475  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779479  169037 command_runner.go:130] >       "size": "95274464",
	I1028 11:46:46.779485  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779490  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.779495  169037 command_runner.go:130] >       },
	I1028 11:46:46.779500  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779506  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779512  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779518  169037 command_runner.go:130] >     },
	I1028 11:46:46.779521  169037 command_runner.go:130] >     {
	I1028 11:46:46.779527  169037 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 11:46:46.779533  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779538  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 11:46:46.779541  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779545  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779553  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 11:46:46.779562  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 11:46:46.779568  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779575  169037 command_runner.go:130] >       "size": "89474374",
	I1028 11:46:46.779579  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779585  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.779589  169037 command_runner.go:130] >       },
	I1028 11:46:46.779595  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779599  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779605  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779608  169037 command_runner.go:130] >     },
	I1028 11:46:46.779614  169037 command_runner.go:130] >     {
	I1028 11:46:46.779620  169037 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 11:46:46.779627  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779632  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 11:46:46.779638  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779642  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779658  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 11:46:46.779667  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 11:46:46.779671  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779677  169037 command_runner.go:130] >       "size": "92783513",
	I1028 11:46:46.779681  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779687  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779691  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779698  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779702  169037 command_runner.go:130] >     },
	I1028 11:46:46.779708  169037 command_runner.go:130] >     {
	I1028 11:46:46.779714  169037 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 11:46:46.779721  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779726  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 11:46:46.779732  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779736  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779745  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 11:46:46.779754  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 11:46:46.779760  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779764  169037 command_runner.go:130] >       "size": "68457798",
	I1028 11:46:46.779770  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779774  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.779780  169037 command_runner.go:130] >       },
	I1028 11:46:46.779785  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779791  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779795  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779800  169037 command_runner.go:130] >     },
	I1028 11:46:46.779804  169037 command_runner.go:130] >     {
	I1028 11:46:46.779812  169037 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 11:46:46.779818  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779823  169037 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 11:46:46.779829  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779833  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779842  169037 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 11:46:46.779854  169037 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 11:46:46.779860  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779864  169037 command_runner.go:130] >       "size": "742080",
	I1028 11:46:46.779867  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779874  169037 command_runner.go:130] >         "value": "65535"
	I1028 11:46:46.779877  169037 command_runner.go:130] >       },
	I1028 11:46:46.779883  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779887  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779893  169037 command_runner.go:130] >       "pinned": true
	I1028 11:46:46.779897  169037 command_runner.go:130] >     }
	I1028 11:46:46.779902  169037 command_runner.go:130] >   ]
	I1028 11:46:46.779905  169037 command_runner.go:130] > }
	I1028 11:46:46.780519  169037 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:46:46.780538  169037 cache_images.go:84] Images are preloaded, skipping loading
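The preload check in the lines above is driven entirely by the JSON that "sudo crictl images --output json" prints: each entry carries id, repoTags, repoDigests, size, uid, username, spec and pinned fields, and the check only needs the repoTags to conclude that every required image is already present on the node. A minimal, self-contained sketch of that kind of check (illustrative only: the field names come from the output above, but the helper itself and its use of os/exec are assumptions, not minikube's actual cache_images code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the subset of the `crictl images --output json` schema used here.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
			Pinned   bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Illustrative check, not minikube's actual code: run crictl and decode its JSON output.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// Index the tags that are present on the node.
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Check a couple of the tags seen in the listing above.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/pause:3.10",
		} {
			fmt.Printf("%s present: %v\n", want, have[want])
		}
	}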
	I1028 11:46:46.780546  169037 kubeadm.go:934] updating node { 192.168.39.184 8443 v1.31.2 crio true true} ...
	I1028 11:46:46.780668  169037 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-450140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-450140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
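The kubelet unit fragment above is just the rendered form of a few node-specific parameters: the kubelet binary path for v1.31.2, the hostname override (multinode-450140) and the node IP (192.168.39.184). A small sketch of assembling that ExecStart flag string in Go (illustrative only; the function and its parameter names are assumptions, not the template minikube actually uses):

	package main

	import "fmt"

	// kubeletExecStart builds the flag string seen in the ExecStart line above
	// from node-specific values. Illustrative only, not minikube's template.
	func kubeletExecStart(kubeletPath, hostnameOverride, nodeIP string) string {
		return fmt.Sprintf(
			"ExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
				" --config=/var/lib/kubelet/config.yaml"+
				" --hostname-override=%s"+
				" --kubeconfig=/etc/kubernetes/kubelet.conf"+
				" --node-ip=%s",
			kubeletPath, hostnameOverride, nodeIP)
	}

	func main() {
		// Values taken from the log line above.
		fmt.Println(kubeletExecStart(
			"/var/lib/minikube/binaries/v1.31.2/kubelet",
			"multinode-450140",
			"192.168.39.184"))
	}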
	I1028 11:46:46.780767  169037 ssh_runner.go:195] Run: crio config
	I1028 11:46:46.815684  169037 command_runner.go:130] ! time="2024-10-28 11:46:46.793481024Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1028 11:46:46.826895  169037 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1028 11:46:46.838329  169037 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1028 11:46:46.838353  169037 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1028 11:46:46.838360  169037 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1028 11:46:46.838364  169037 command_runner.go:130] > #
	I1028 11:46:46.838387  169037 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1028 11:46:46.838395  169037 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1028 11:46:46.838402  169037 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1028 11:46:46.838409  169037 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1028 11:46:46.838413  169037 command_runner.go:130] > # reload'.
	I1028 11:46:46.838419  169037 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1028 11:46:46.838425  169037 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1028 11:46:46.838431  169037 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1028 11:46:46.838436  169037 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1028 11:46:46.838448  169037 command_runner.go:130] > [crio]
	I1028 11:46:46.838453  169037 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1028 11:46:46.838461  169037 command_runner.go:130] > # containers images, in this directory.
	I1028 11:46:46.838466  169037 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1028 11:46:46.838477  169037 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1028 11:46:46.838484  169037 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1028 11:46:46.838491  169037 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1028 11:46:46.838498  169037 command_runner.go:130] > # imagestore = ""
	I1028 11:46:46.838504  169037 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1028 11:46:46.838509  169037 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1028 11:46:46.838516  169037 command_runner.go:130] > storage_driver = "overlay"
	I1028 11:46:46.838521  169037 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1028 11:46:46.838527  169037 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1028 11:46:46.838531  169037 command_runner.go:130] > storage_option = [
	I1028 11:46:46.838536  169037 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1028 11:46:46.838545  169037 command_runner.go:130] > ]
	I1028 11:46:46.838554  169037 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1028 11:46:46.838560  169037 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1028 11:46:46.838567  169037 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1028 11:46:46.838572  169037 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1028 11:46:46.838580  169037 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1028 11:46:46.838587  169037 command_runner.go:130] > # always happen on a node reboot
	I1028 11:46:46.838592  169037 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1028 11:46:46.838606  169037 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1028 11:46:46.838615  169037 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1028 11:46:46.838622  169037 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1028 11:46:46.838627  169037 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1028 11:46:46.838637  169037 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1028 11:46:46.838647  169037 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1028 11:46:46.838654  169037 command_runner.go:130] > # internal_wipe = true
	I1028 11:46:46.838662  169037 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1028 11:46:46.838669  169037 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1028 11:46:46.838673  169037 command_runner.go:130] > # internal_repair = false
	I1028 11:46:46.838681  169037 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1028 11:46:46.838687  169037 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1028 11:46:46.838694  169037 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1028 11:46:46.838699  169037 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1028 11:46:46.838710  169037 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1028 11:46:46.838716  169037 command_runner.go:130] > [crio.api]
	I1028 11:46:46.838721  169037 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1028 11:46:46.838728  169037 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1028 11:46:46.838733  169037 command_runner.go:130] > # IP address on which the stream server will listen.
	I1028 11:46:46.838740  169037 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1028 11:46:46.838746  169037 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1028 11:46:46.838754  169037 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1028 11:46:46.838757  169037 command_runner.go:130] > # stream_port = "0"
	I1028 11:46:46.838763  169037 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1028 11:46:46.838770  169037 command_runner.go:130] > # stream_enable_tls = false
	I1028 11:46:46.838780  169037 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1028 11:46:46.838787  169037 command_runner.go:130] > # stream_idle_timeout = ""
	I1028 11:46:46.838793  169037 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1028 11:46:46.838802  169037 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1028 11:46:46.838808  169037 command_runner.go:130] > # minutes.
	I1028 11:46:46.838812  169037 command_runner.go:130] > # stream_tls_cert = ""
	I1028 11:46:46.838819  169037 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1028 11:46:46.838828  169037 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1028 11:46:46.838833  169037 command_runner.go:130] > # stream_tls_key = ""
	I1028 11:46:46.838841  169037 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1028 11:46:46.838847  169037 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1028 11:46:46.838868  169037 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1028 11:46:46.838874  169037 command_runner.go:130] > # stream_tls_ca = ""
	I1028 11:46:46.838881  169037 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 11:46:46.838885  169037 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1028 11:46:46.838893  169037 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 11:46:46.838899  169037 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
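A quick sanity check of the message-size numbers above (plain arithmetic, nothing minikube-specific): the configured value is 16 MiB, while the fallback that the comment describes for an unset or non-positive value is 80 MiB.

	package main

	import "fmt"

	func main() {
		fmt.Println(16 * 1024 * 1024) // 16777216, the value set in the config above
		fmt.Println(80 * 1024 * 1024) // 83886080, the documented default when unset
	}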
	I1028 11:46:46.838906  169037 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1028 11:46:46.838913  169037 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1028 11:46:46.838919  169037 command_runner.go:130] > [crio.runtime]
	I1028 11:46:46.838925  169037 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1028 11:46:46.838933  169037 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1028 11:46:46.838937  169037 command_runner.go:130] > # "nofile=1024:2048"
	I1028 11:46:46.838942  169037 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1028 11:46:46.838947  169037 command_runner.go:130] > # default_ulimits = [
	I1028 11:46:46.838950  169037 command_runner.go:130] > # ]
	I1028 11:46:46.838956  169037 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1028 11:46:46.838964  169037 command_runner.go:130] > # no_pivot = false
	I1028 11:46:46.838970  169037 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1028 11:46:46.838980  169037 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1028 11:46:46.838985  169037 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1028 11:46:46.838990  169037 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1028 11:46:46.838998  169037 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1028 11:46:46.839010  169037 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 11:46:46.839017  169037 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1028 11:46:46.839021  169037 command_runner.go:130] > # Cgroup setting for conmon
	I1028 11:46:46.839028  169037 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1028 11:46:46.839034  169037 command_runner.go:130] > conmon_cgroup = "pod"
	I1028 11:46:46.839040  169037 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1028 11:46:46.839046  169037 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1028 11:46:46.839055  169037 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 11:46:46.839059  169037 command_runner.go:130] > conmon_env = [
	I1028 11:46:46.839067  169037 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 11:46:46.839070  169037 command_runner.go:130] > ]
	I1028 11:46:46.839078  169037 command_runner.go:130] > # Additional environment variables to set for all the
	I1028 11:46:46.839085  169037 command_runner.go:130] > # containers. These are overridden if set in the
	I1028 11:46:46.839091  169037 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1028 11:46:46.839097  169037 command_runner.go:130] > # default_env = [
	I1028 11:46:46.839101  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839110  169037 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1028 11:46:46.839119  169037 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1028 11:46:46.839125  169037 command_runner.go:130] > # selinux = false
	I1028 11:46:46.839131  169037 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1028 11:46:46.839139  169037 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1028 11:46:46.839147  169037 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1028 11:46:46.839151  169037 command_runner.go:130] > # seccomp_profile = ""
	I1028 11:46:46.839159  169037 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1028 11:46:46.839164  169037 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1028 11:46:46.839172  169037 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1028 11:46:46.839179  169037 command_runner.go:130] > # which might increase security.
	I1028 11:46:46.839183  169037 command_runner.go:130] > # This option is currently deprecated,
	I1028 11:46:46.839193  169037 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1028 11:46:46.839200  169037 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1028 11:46:46.839206  169037 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1028 11:46:46.839214  169037 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1028 11:46:46.839227  169037 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1028 11:46:46.839236  169037 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1028 11:46:46.839244  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.839251  169037 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1028 11:46:46.839257  169037 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1028 11:46:46.839264  169037 command_runner.go:130] > # the cgroup blockio controller.
	I1028 11:46:46.839268  169037 command_runner.go:130] > # blockio_config_file = ""
	I1028 11:46:46.839277  169037 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1028 11:46:46.839283  169037 command_runner.go:130] > # blockio parameters.
	I1028 11:46:46.839287  169037 command_runner.go:130] > # blockio_reload = false
	I1028 11:46:46.839296  169037 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1028 11:46:46.839300  169037 command_runner.go:130] > # irqbalance daemon.
	I1028 11:46:46.839305  169037 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1028 11:46:46.839313  169037 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1028 11:46:46.839322  169037 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1028 11:46:46.839329  169037 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1028 11:46:46.839337  169037 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1028 11:46:46.839346  169037 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1028 11:46:46.839353  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.839357  169037 command_runner.go:130] > # rdt_config_file = ""
	I1028 11:46:46.839365  169037 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1028 11:46:46.839371  169037 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1028 11:46:46.839394  169037 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1028 11:46:46.839402  169037 command_runner.go:130] > # separate_pull_cgroup = ""
	I1028 11:46:46.839408  169037 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1028 11:46:46.839416  169037 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1028 11:46:46.839423  169037 command_runner.go:130] > # will be added.
	I1028 11:46:46.839427  169037 command_runner.go:130] > # default_capabilities = [
	I1028 11:46:46.839433  169037 command_runner.go:130] > # 	"CHOWN",
	I1028 11:46:46.839437  169037 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1028 11:46:46.839443  169037 command_runner.go:130] > # 	"FSETID",
	I1028 11:46:46.839447  169037 command_runner.go:130] > # 	"FOWNER",
	I1028 11:46:46.839451  169037 command_runner.go:130] > # 	"SETGID",
	I1028 11:46:46.839455  169037 command_runner.go:130] > # 	"SETUID",
	I1028 11:46:46.839461  169037 command_runner.go:130] > # 	"SETPCAP",
	I1028 11:46:46.839465  169037 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1028 11:46:46.839470  169037 command_runner.go:130] > # 	"KILL",
	I1028 11:46:46.839473  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839483  169037 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1028 11:46:46.839492  169037 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1028 11:46:46.839499  169037 command_runner.go:130] > # add_inheritable_capabilities = false
	I1028 11:46:46.839508  169037 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1028 11:46:46.839515  169037 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 11:46:46.839521  169037 command_runner.go:130] > default_sysctls = [
	I1028 11:46:46.839526  169037 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1028 11:46:46.839531  169037 command_runner.go:130] > ]
	I1028 11:46:46.839536  169037 command_runner.go:130] > # List of devices on the host that a
	I1028 11:46:46.839544  169037 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1028 11:46:46.839550  169037 command_runner.go:130] > # allowed_devices = [
	I1028 11:46:46.839554  169037 command_runner.go:130] > # 	"/dev/fuse",
	I1028 11:46:46.839559  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839564  169037 command_runner.go:130] > # List of additional devices. specified as
	I1028 11:46:46.839573  169037 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1028 11:46:46.839580  169037 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1028 11:46:46.839586  169037 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 11:46:46.839592  169037 command_runner.go:130] > # additional_devices = [
	I1028 11:46:46.839595  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839601  169037 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1028 11:46:46.839607  169037 command_runner.go:130] > # cdi_spec_dirs = [
	I1028 11:46:46.839611  169037 command_runner.go:130] > # 	"/etc/cdi",
	I1028 11:46:46.839617  169037 command_runner.go:130] > # 	"/var/run/cdi",
	I1028 11:46:46.839621  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839627  169037 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1028 11:46:46.839635  169037 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1028 11:46:46.839642  169037 command_runner.go:130] > # Defaults to false.
	I1028 11:46:46.839648  169037 command_runner.go:130] > # device_ownership_from_security_context = false
	I1028 11:46:46.839656  169037 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1028 11:46:46.839665  169037 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1028 11:46:46.839670  169037 command_runner.go:130] > # hooks_dir = [
	I1028 11:46:46.839675  169037 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1028 11:46:46.839680  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839686  169037 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1028 11:46:46.839695  169037 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1028 11:46:46.839702  169037 command_runner.go:130] > # its default mounts from the following two files:
	I1028 11:46:46.839705  169037 command_runner.go:130] > #
	I1028 11:46:46.839713  169037 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1028 11:46:46.839722  169037 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1028 11:46:46.839730  169037 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1028 11:46:46.839733  169037 command_runner.go:130] > #
	I1028 11:46:46.839738  169037 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1028 11:46:46.839747  169037 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1028 11:46:46.839756  169037 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1028 11:46:46.839764  169037 command_runner.go:130] > #      only add mounts it finds in this file.
	I1028 11:46:46.839767  169037 command_runner.go:130] > #
	I1028 11:46:46.839771  169037 command_runner.go:130] > # default_mounts_file = ""
	I1028 11:46:46.839779  169037 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1028 11:46:46.839785  169037 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1028 11:46:46.839791  169037 command_runner.go:130] > pids_limit = 1024
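If a test helper ever needed to read values such as cgroup_manager, default_sysctls or pids_limit back out of a rendered crio.conf, one way is to decode the TOML into a struct. A minimal sketch using the github.com/BurntSushi/toml package (an assumed dependency and an assumed file path; this run only inspects the configuration via the "crio config" output shown here):

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// crioConfig mirrors only the handful of fields referenced here;
	// the real configuration has many more tables and keys.
	type crioConfig struct {
		Crio struct {
			Runtime struct {
				CgroupManager  string   `toml:"cgroup_manager"`
				DefaultSysctls []string `toml:"default_sysctls"`
				PidsLimit      int      `toml:"pids_limit"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		// Assumes the rendered config has been written to the default location.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			panic(err)
		}
		fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager)
		fmt.Println("pids_limit:", cfg.Crio.Runtime.PidsLimit)
		fmt.Println("default_sysctls:", cfg.Crio.Runtime.DefaultSysctls)
	}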
	I1028 11:46:46.839797  169037 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1028 11:46:46.839805  169037 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1028 11:46:46.839813  169037 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1028 11:46:46.839821  169037 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1028 11:46:46.839827  169037 command_runner.go:130] > # log_size_max = -1
	I1028 11:46:46.839834  169037 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1028 11:46:46.839840  169037 command_runner.go:130] > # log_to_journald = false
	I1028 11:46:46.839846  169037 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1028 11:46:46.839853  169037 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1028 11:46:46.839858  169037 command_runner.go:130] > # Path to directory for container attach sockets.
	I1028 11:46:46.839865  169037 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1028 11:46:46.839871  169037 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1028 11:46:46.839878  169037 command_runner.go:130] > # bind_mount_prefix = ""
	I1028 11:46:46.839884  169037 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1028 11:46:46.839890  169037 command_runner.go:130] > # read_only = false
	I1028 11:46:46.839896  169037 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1028 11:46:46.839904  169037 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1028 11:46:46.839911  169037 command_runner.go:130] > # live configuration reload.
	I1028 11:46:46.839915  169037 command_runner.go:130] > # log_level = "info"
	I1028 11:46:46.839923  169037 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1028 11:46:46.839928  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.839935  169037 command_runner.go:130] > # log_filter = ""
	I1028 11:46:46.839940  169037 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1028 11:46:46.839949  169037 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1028 11:46:46.839953  169037 command_runner.go:130] > # separated by comma.
	I1028 11:46:46.839962  169037 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 11:46:46.839966  169037 command_runner.go:130] > # uid_mappings = ""
	I1028 11:46:46.839976  169037 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1028 11:46:46.839988  169037 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1028 11:46:46.839995  169037 command_runner.go:130] > # separated by comma.
	I1028 11:46:46.840003  169037 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 11:46:46.840011  169037 command_runner.go:130] > # gid_mappings = ""
	I1028 11:46:46.840018  169037 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1028 11:46:46.840025  169037 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 11:46:46.840031  169037 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 11:46:46.840038  169037 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 11:46:46.840044  169037 command_runner.go:130] > # minimum_mappable_uid = -1
	I1028 11:46:46.840050  169037 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1028 11:46:46.840058  169037 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 11:46:46.840065  169037 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 11:46:46.840074  169037 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 11:46:46.840081  169037 command_runner.go:130] > # minimum_mappable_gid = -1
	I1028 11:46:46.840087  169037 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1028 11:46:46.840095  169037 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1028 11:46:46.840103  169037 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1028 11:46:46.840110  169037 command_runner.go:130] > # ctr_stop_timeout = 30
	I1028 11:46:46.840116  169037 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1028 11:46:46.840123  169037 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1028 11:46:46.840130  169037 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1028 11:46:46.840135  169037 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1028 11:46:46.840142  169037 command_runner.go:130] > drop_infra_ctr = false
	I1028 11:46:46.840147  169037 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1028 11:46:46.840155  169037 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1028 11:46:46.840161  169037 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1028 11:46:46.840168  169037 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1028 11:46:46.840175  169037 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1028 11:46:46.840182  169037 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1028 11:46:46.840190  169037 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1028 11:46:46.840195  169037 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1028 11:46:46.840199  169037 command_runner.go:130] > # shared_cpuset = ""
	I1028 11:46:46.840207  169037 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1028 11:46:46.840212  169037 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1028 11:46:46.840218  169037 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1028 11:46:46.840229  169037 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1028 11:46:46.840235  169037 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1028 11:46:46.840241  169037 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1028 11:46:46.840251  169037 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1028 11:46:46.840258  169037 command_runner.go:130] > # enable_criu_support = false
	I1028 11:46:46.840263  169037 command_runner.go:130] > # Enable/disable the generation of the container,
	I1028 11:46:46.840272  169037 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1028 11:46:46.840279  169037 command_runner.go:130] > # enable_pod_events = false
	I1028 11:46:46.840285  169037 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1028 11:46:46.840299  169037 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1028 11:46:46.840303  169037 command_runner.go:130] > # default_runtime = "runc"
	I1028 11:46:46.840311  169037 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1028 11:46:46.840318  169037 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1028 11:46:46.840329  169037 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1028 11:46:46.840340  169037 command_runner.go:130] > # creation as a file is not desired either.
	I1028 11:46:46.840350  169037 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1028 11:46:46.840357  169037 command_runner.go:130] > # the hostname is being managed dynamically.
	I1028 11:46:46.840361  169037 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1028 11:46:46.840367  169037 command_runner.go:130] > # ]
	I1028 11:46:46.840373  169037 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1028 11:46:46.840381  169037 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1028 11:46:46.840388  169037 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1028 11:46:46.840395  169037 command_runner.go:130] > # Each entry in the table should follow the format:
	I1028 11:46:46.840397  169037 command_runner.go:130] > #
	I1028 11:46:46.840402  169037 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1028 11:46:46.840409  169037 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1028 11:46:46.840434  169037 command_runner.go:130] > # runtime_type = "oci"
	I1028 11:46:46.840441  169037 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1028 11:46:46.840446  169037 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1028 11:46:46.840450  169037 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1028 11:46:46.840454  169037 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1028 11:46:46.840458  169037 command_runner.go:130] > # monitor_env = []
	I1028 11:46:46.840465  169037 command_runner.go:130] > # privileged_without_host_devices = false
	I1028 11:46:46.840469  169037 command_runner.go:130] > # allowed_annotations = []
	I1028 11:46:46.840477  169037 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1028 11:46:46.840483  169037 command_runner.go:130] > # Where:
	I1028 11:46:46.840491  169037 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1028 11:46:46.840500  169037 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1028 11:46:46.840507  169037 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1028 11:46:46.840518  169037 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1028 11:46:46.840524  169037 command_runner.go:130] > #   in $PATH.
	I1028 11:46:46.840532  169037 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1028 11:46:46.840545  169037 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1028 11:46:46.840551  169037 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1028 11:46:46.840557  169037 command_runner.go:130] > #   state.
	I1028 11:46:46.840563  169037 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1028 11:46:46.840571  169037 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1028 11:46:46.840588  169037 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1028 11:46:46.840596  169037 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1028 11:46:46.840604  169037 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1028 11:46:46.840611  169037 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1028 11:46:46.840619  169037 command_runner.go:130] > #   The currently recognized values are:
	I1028 11:46:46.840625  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1028 11:46:46.840634  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1028 11:46:46.840640  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1028 11:46:46.840647  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1028 11:46:46.840654  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1028 11:46:46.840663  169037 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1028 11:46:46.840672  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1028 11:46:46.840680  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1028 11:46:46.840689  169037 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1028 11:46:46.840698  169037 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1028 11:46:46.840705  169037 command_runner.go:130] > #   deprecated option "conmon".
	I1028 11:46:46.840711  169037 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1028 11:46:46.840718  169037 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1028 11:46:46.840725  169037 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1028 11:46:46.840732  169037 command_runner.go:130] > #   should be moved to the container's cgroup
	I1028 11:46:46.840738  169037 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1028 11:46:46.840747  169037 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1028 11:46:46.840756  169037 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1028 11:46:46.840764  169037 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1028 11:46:46.840767  169037 command_runner.go:130] > #
	I1028 11:46:46.840774  169037 command_runner.go:130] > # Using the seccomp notifier feature:
	I1028 11:46:46.840780  169037 command_runner.go:130] > #
	I1028 11:46:46.840789  169037 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1028 11:46:46.840797  169037 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1028 11:46:46.840801  169037 command_runner.go:130] > #
	I1028 11:46:46.840807  169037 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1028 11:46:46.840813  169037 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1028 11:46:46.840819  169037 command_runner.go:130] > #
	I1028 11:46:46.840827  169037 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1028 11:46:46.840834  169037 command_runner.go:130] > # feature.
	I1028 11:46:46.840838  169037 command_runner.go:130] > #
	I1028 11:46:46.840845  169037 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1028 11:46:46.840851  169037 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1028 11:46:46.840859  169037 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1028 11:46:46.840867  169037 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1028 11:46:46.840876  169037 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1028 11:46:46.840879  169037 command_runner.go:130] > #
	I1028 11:46:46.840887  169037 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1028 11:46:46.840895  169037 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1028 11:46:46.840898  169037 command_runner.go:130] > #
	I1028 11:46:46.840906  169037 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1028 11:46:46.840912  169037 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1028 11:46:46.840917  169037 command_runner.go:130] > #
	I1028 11:46:46.840923  169037 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1028 11:46:46.840931  169037 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1028 11:46:46.840934  169037 command_runner.go:130] > # limitation.
	I1028 11:46:46.840940  169037 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1028 11:46:46.840945  169037 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1028 11:46:46.840949  169037 command_runner.go:130] > runtime_type = "oci"
	I1028 11:46:46.840953  169037 command_runner.go:130] > runtime_root = "/run/runc"
	I1028 11:46:46.840959  169037 command_runner.go:130] > runtime_config_path = ""
	I1028 11:46:46.840964  169037 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1028 11:46:46.840970  169037 command_runner.go:130] > monitor_cgroup = "pod"
	I1028 11:46:46.840974  169037 command_runner.go:130] > monitor_exec_cgroup = ""
	I1028 11:46:46.840978  169037 command_runner.go:130] > monitor_env = [
	I1028 11:46:46.840983  169037 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 11:46:46.840987  169037 command_runner.go:130] > ]
	I1028 11:46:46.840991  169037 command_runner.go:130] > privileged_without_host_devices = false
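The [crio.runtime.runtimes.runc] table above is the handler this cluster uses by default. Handlers defined the same way are selected per pod through a Kubernetes RuntimeClass whose handler field matches the table name. A minimal sketch, assuming a second handler named "crun" had been added to the CRI-O config; the handler, pod, and image names here are illustrative and not part of this test run:

# RuntimeClass .handler must match a [crio.runtime.runtimes.crun] table on every node.
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: crun
handler: crun
---
apiVersion: v1
kind: Pod
metadata:
  name: runtimeclass-demo
spec:
  runtimeClassName: crun
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.10
EOF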
	I1028 11:46:46.841000  169037 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1028 11:46:46.841005  169037 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1028 11:46:46.841013  169037 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1028 11:46:46.841027  169037 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1028 11:46:46.841037  169037 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1028 11:46:46.841042  169037 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1028 11:46:46.841053  169037 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1028 11:46:46.841062  169037 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1028 11:46:46.841068  169037 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1028 11:46:46.841077  169037 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1028 11:46:46.841083  169037 command_runner.go:130] > # Example:
	I1028 11:46:46.841088  169037 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1028 11:46:46.841095  169037 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1028 11:46:46.841101  169037 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1028 11:46:46.841108  169037 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1028 11:46:46.841112  169037 command_runner.go:130] > # cpuset = 0
	I1028 11:46:46.841118  169037 command_runner.go:130] > # cpushares = "0-1"
	I1028 11:46:46.841121  169037 command_runner.go:130] > # Where:
	I1028 11:46:46.841127  169037 command_runner.go:130] > # The workload name is workload-type.
	I1028 11:46:46.841137  169037 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1028 11:46:46.841145  169037 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1028 11:46:46.841150  169037 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1028 11:46:46.841160  169037 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1028 11:46:46.841168  169037 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
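Opting a pod into the workload sketched in the comments above is done purely with annotations: the activation annotation turns the workload on (key-only match, value ignored) and the per-container annotation overrides a single resource. A minimal sketch following the per-container form shown in the commented example; the container name and cpushares value are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    # Activation annotation from the example above; the value is ignored.
    io.crio/workload: ""
    # Per-container override for the "demo" container, as in the commented example.
    io.crio.workload-type/demo: '{"cpushares": "512"}'
spec:
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.10
EOF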
	I1028 11:46:46.841175  169037 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1028 11:46:46.841182  169037 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1028 11:46:46.841188  169037 command_runner.go:130] > # Default value is set to true
	I1028 11:46:46.841193  169037 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1028 11:46:46.841200  169037 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1028 11:46:46.841205  169037 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1028 11:46:46.841212  169037 command_runner.go:130] > # Default value is set to 'false'
	I1028 11:46:46.841216  169037 command_runner.go:130] > # disable_hostport_mapping = false
	I1028 11:46:46.841226  169037 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1028 11:46:46.841229  169037 command_runner.go:130] > #
	I1028 11:46:46.841235  169037 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1028 11:46:46.841241  169037 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1028 11:46:46.841247  169037 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1028 11:46:46.841253  169037 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1028 11:46:46.841261  169037 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1028 11:46:46.841264  169037 command_runner.go:130] > [crio.image]
	I1028 11:46:46.841270  169037 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1028 11:46:46.841273  169037 command_runner.go:130] > # default_transport = "docker://"
	I1028 11:46:46.841279  169037 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1028 11:46:46.841284  169037 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1028 11:46:46.841288  169037 command_runner.go:130] > # global_auth_file = ""
	I1028 11:46:46.841293  169037 command_runner.go:130] > # The image used to instantiate infra containers.
	I1028 11:46:46.841297  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.841302  169037 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1028 11:46:46.841308  169037 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1028 11:46:46.841313  169037 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1028 11:46:46.841317  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.841322  169037 command_runner.go:130] > # pause_image_auth_file = ""
	I1028 11:46:46.841327  169037 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1028 11:46:46.841333  169037 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1028 11:46:46.841339  169037 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1028 11:46:46.841344  169037 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1028 11:46:46.841348  169037 command_runner.go:130] > # pause_command = "/pause"
	I1028 11:46:46.841354  169037 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1028 11:46:46.841360  169037 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1028 11:46:46.841365  169037 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1028 11:46:46.841372  169037 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1028 11:46:46.841378  169037 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1028 11:46:46.841383  169037 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1028 11:46:46.841386  169037 command_runner.go:130] > # pinned_images = [
	I1028 11:46:46.841390  169037 command_runner.go:130] > # ]
	I1028 11:46:46.841396  169037 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1028 11:46:46.841406  169037 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1028 11:46:46.841414  169037 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1028 11:46:46.841422  169037 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1028 11:46:46.841428  169037 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1028 11:46:46.841435  169037 command_runner.go:130] > # signature_policy = ""
	I1028 11:46:46.841440  169037 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1028 11:46:46.841449  169037 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1028 11:46:46.841457  169037 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1028 11:46:46.841468  169037 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1028 11:46:46.841474  169037 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1028 11:46:46.841481  169037 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1028 11:46:46.841488  169037 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1028 11:46:46.841496  169037 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1028 11:46:46.841502  169037 command_runner.go:130] > # changing them here.
	I1028 11:46:46.841506  169037 command_runner.go:130] > # insecure_registries = [
	I1028 11:46:46.841537  169037 command_runner.go:130] > # ]
	I1028 11:46:46.841543  169037 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1028 11:46:46.841548  169037 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1028 11:46:46.841554  169037 command_runner.go:130] > # image_volumes = "mkdir"
	I1028 11:46:46.841559  169037 command_runner.go:130] > # Temporary directory to use for storing big files
	I1028 11:46:46.841565  169037 command_runner.go:130] > # big_files_temporary_dir = ""
	I1028 11:46:46.841571  169037 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1028 11:46:46.841577  169037 command_runner.go:130] > # CNI plugins.
	I1028 11:46:46.841581  169037 command_runner.go:130] > [crio.network]
	I1028 11:46:46.841589  169037 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1028 11:46:46.841596  169037 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1028 11:46:46.841600  169037 command_runner.go:130] > # cni_default_network = ""
	I1028 11:46:46.841606  169037 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1028 11:46:46.841613  169037 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1028 11:46:46.841618  169037 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1028 11:46:46.841624  169037 command_runner.go:130] > # plugin_dirs = [
	I1028 11:46:46.841628  169037 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1028 11:46:46.841632  169037 command_runner.go:130] > # ]
	I1028 11:46:46.841640  169037 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1028 11:46:46.841644  169037 command_runner.go:130] > [crio.metrics]
	I1028 11:46:46.841651  169037 command_runner.go:130] > # Globally enable or disable metrics support.
	I1028 11:46:46.841656  169037 command_runner.go:130] > enable_metrics = true
	I1028 11:46:46.841662  169037 command_runner.go:130] > # Specify enabled metrics collectors.
	I1028 11:46:46.841667  169037 command_runner.go:130] > # Per default all metrics are enabled.
	I1028 11:46:46.841675  169037 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1028 11:46:46.841684  169037 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1028 11:46:46.841692  169037 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1028 11:46:46.841698  169037 command_runner.go:130] > # metrics_collectors = [
	I1028 11:46:46.841702  169037 command_runner.go:130] > # 	"operations",
	I1028 11:46:46.841712  169037 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1028 11:46:46.841716  169037 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1028 11:46:46.841721  169037 command_runner.go:130] > # 	"operations_errors",
	I1028 11:46:46.841725  169037 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1028 11:46:46.841730  169037 command_runner.go:130] > # 	"image_pulls_by_name",
	I1028 11:46:46.841734  169037 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1028 11:46:46.841743  169037 command_runner.go:130] > # 	"image_pulls_failures",
	I1028 11:46:46.841749  169037 command_runner.go:130] > # 	"image_pulls_successes",
	I1028 11:46:46.841754  169037 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1028 11:46:46.841760  169037 command_runner.go:130] > # 	"image_layer_reuse",
	I1028 11:46:46.841764  169037 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1028 11:46:46.841771  169037 command_runner.go:130] > # 	"containers_oom_total",
	I1028 11:46:46.841775  169037 command_runner.go:130] > # 	"containers_oom",
	I1028 11:46:46.841781  169037 command_runner.go:130] > # 	"processes_defunct",
	I1028 11:46:46.841785  169037 command_runner.go:130] > # 	"operations_total",
	I1028 11:46:46.841792  169037 command_runner.go:130] > # 	"operations_latency_seconds",
	I1028 11:46:46.841796  169037 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1028 11:46:46.841800  169037 command_runner.go:130] > # 	"operations_errors_total",
	I1028 11:46:46.841805  169037 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1028 11:46:46.841809  169037 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1028 11:46:46.841816  169037 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1028 11:46:46.841821  169037 command_runner.go:130] > # 	"image_pulls_success_total",
	I1028 11:46:46.841827  169037 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1028 11:46:46.841832  169037 command_runner.go:130] > # 	"containers_oom_count_total",
	I1028 11:46:46.841839  169037 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1028 11:46:46.841844  169037 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1028 11:46:46.841849  169037 command_runner.go:130] > # ]
	I1028 11:46:46.841854  169037 command_runner.go:130] > # The port on which the metrics server will listen.
	I1028 11:46:46.841860  169037 command_runner.go:130] > # metrics_port = 9090
	I1028 11:46:46.841865  169037 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1028 11:46:46.841871  169037 command_runner.go:130] > # metrics_socket = ""
	I1028 11:46:46.841877  169037 command_runner.go:130] > # The certificate for the secure metrics server.
	I1028 11:46:46.841885  169037 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1028 11:46:46.841895  169037 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1028 11:46:46.841902  169037 command_runner.go:130] > # certificate on any modification event.
	I1028 11:46:46.841906  169037 command_runner.go:130] > # metrics_cert = ""
	I1028 11:46:46.841911  169037 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1028 11:46:46.841917  169037 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1028 11:46:46.841923  169037 command_runner.go:130] > # metrics_key = ""
	I1028 11:46:46.841929  169037 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1028 11:46:46.841935  169037 command_runner.go:130] > [crio.tracing]
	I1028 11:46:46.841940  169037 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1028 11:46:46.841944  169037 command_runner.go:130] > # enable_tracing = false
	I1028 11:46:46.841949  169037 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1028 11:46:46.841956  169037 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1028 11:46:46.841963  169037 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1028 11:46:46.841970  169037 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1028 11:46:46.841974  169037 command_runner.go:130] > # CRI-O NRI configuration.
	I1028 11:46:46.841977  169037 command_runner.go:130] > [crio.nri]
	I1028 11:46:46.841982  169037 command_runner.go:130] > # Globally enable or disable NRI.
	I1028 11:46:46.841986  169037 command_runner.go:130] > # enable_nri = false
	I1028 11:46:46.841992  169037 command_runner.go:130] > # NRI socket to listen on.
	I1028 11:46:46.841999  169037 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1028 11:46:46.842003  169037 command_runner.go:130] > # NRI plugin directory to use.
	I1028 11:46:46.842008  169037 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1028 11:46:46.842015  169037 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1028 11:46:46.842019  169037 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1028 11:46:46.842025  169037 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1028 11:46:46.842031  169037 command_runner.go:130] > # nri_disable_connections = false
	I1028 11:46:46.842036  169037 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1028 11:46:46.842042  169037 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1028 11:46:46.842047  169037 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1028 11:46:46.842059  169037 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1028 11:46:46.842064  169037 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1028 11:46:46.842070  169037 command_runner.go:130] > [crio.stats]
	I1028 11:46:46.842076  169037 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1028 11:46:46.842083  169037 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1028 11:46:46.842088  169037 command_runner.go:130] > # stats_collection_period = 0
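None of the values dumped above need to be edited in place for local experiments: CRI-O also reads drop-in files, typically from /etc/crio/crio.conf.d/, so a small override plus a service restart is usually enough. A minimal sketch with illustrative values (the drop-in filename, pinned image, and registry are assumptions, not taken from this run):

# Override a few [crio.image] settings via a drop-in, then restart CRI-O.
sudo tee /etc/crio/crio.conf.d/99-local-overrides.conf > /dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"
pinned_images = [
    "registry.k8s.io/pause:3.10",
]
insecure_registries = [
    "registry.local:5000",
]
EOF
sudo systemctl restart crio
# Confirm the runtime is back and kube-system containers are still listed
# (the same filter the test harness uses further down in this log).
sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system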
	I1028 11:46:46.842187  169037 cni.go:84] Creating CNI manager for ""
	I1028 11:46:46.842196  169037 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 11:46:46.842207  169037 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:46:46.842233  169037 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.184 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-450140 NodeName:multinode-450140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:46:46.842367  169037 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-450140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.184"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.184"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
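	The rendered kubeadm, kubelet, and kube-proxy configuration above is what gets staged onto the node a few lines below (kubeadm.yaml.new under /var/tmp/minikube, the kubelet drop-in under /etc/systemd/system/kubelet.service.d). A minimal sketch of how to inspect what the node actually runs, assuming the multinode-450140 profile is up; the commands are illustrative and not part of the test:

# Staged kubeadm config and kubelet systemd drop-in on the node.
minikube -p multinode-450140 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
minikube -p multinode-450140 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Effective kubelet unit with all drop-ins merged.
minikube -p multinode-450140 ssh -- systemctl cat kubelet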
	
	I1028 11:46:46.842431  169037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:46:46.853895  169037 command_runner.go:130] > kubeadm
	I1028 11:46:46.853916  169037 command_runner.go:130] > kubectl
	I1028 11:46:46.853920  169037 command_runner.go:130] > kubelet
	I1028 11:46:46.853941  169037 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:46:46.853989  169037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 11:46:46.864783  169037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 11:46:46.882757  169037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:46:46.900945  169037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1028 11:46:46.920394  169037 ssh_runner.go:195] Run: grep 192.168.39.184	control-plane.minikube.internal$ /etc/hosts
	I1028 11:46:46.924986  169037 command_runner.go:130] > 192.168.39.184	control-plane.minikube.internal
	I1028 11:46:46.925119  169037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:46:47.064741  169037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:46:47.080823  169037 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140 for IP: 192.168.39.184
	I1028 11:46:47.080853  169037 certs.go:194] generating shared ca certs ...
	I1028 11:46:47.080874  169037 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:46:47.081057  169037 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:46:47.081118  169037 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:46:47.081132  169037 certs.go:256] generating profile certs ...
	I1028 11:46:47.081239  169037 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/client.key
	I1028 11:46:47.081335  169037 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.key.cd51ceb4
	I1028 11:46:47.081376  169037 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.key
	I1028 11:46:47.081391  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:46:47.081404  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:46:47.081417  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:46:47.081432  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:46:47.081443  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:46:47.081455  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:46:47.081466  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:46:47.081477  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:46:47.081559  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:46:47.081604  169037 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:46:47.081617  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:46:47.081655  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:46:47.081686  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:46:47.081715  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:46:47.081756  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:46:47.081785  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.081799  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.081815  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.082441  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:46:47.108561  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:46:47.135124  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:46:47.160853  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:46:47.186431  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:46:47.212544  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:46:47.239675  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:46:47.264787  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:46:47.289804  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:46:47.315036  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:46:47.340038  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:46:47.364359  169037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:46:47.382352  169037 ssh_runner.go:195] Run: openssl version
	I1028 11:46:47.388634  169037 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1028 11:46:47.388710  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:46:47.400445  169037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.405458  169037 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.405781  169037 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.405833  169037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.411890  169037 command_runner.go:130] > 3ec20f2e
	I1028 11:46:47.412051  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:46:47.422335  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:46:47.434480  169037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.439363  169037 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.439395  169037 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.439444  169037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.445443  169037 command_runner.go:130] > b5213941
	I1028 11:46:47.445619  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:46:47.455825  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:46:47.467465  169037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.472447  169037 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.472494  169037 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.472537  169037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.478647  169037 command_runner.go:130] > 51391683
	I1028 11:46:47.478804  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:46:47.488842  169037 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:46:47.493678  169037 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:46:47.493717  169037 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1028 11:46:47.493726  169037 command_runner.go:130] > Device: 253,1	Inode: 7338542     Links: 1
	I1028 11:46:47.493736  169037 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 11:46:47.493748  169037 command_runner.go:130] > Access: 2024-10-28 11:39:58.796315840 +0000
	I1028 11:46:47.493757  169037 command_runner.go:130] > Modify: 2024-10-28 11:39:58.796315840 +0000
	I1028 11:46:47.493767  169037 command_runner.go:130] > Change: 2024-10-28 11:39:58.796315840 +0000
	I1028 11:46:47.493777  169037 command_runner.go:130] >  Birth: 2024-10-28 11:39:58.796315840 +0000
	I1028 11:46:47.493832  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 11:46:47.499683  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.499811  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 11:46:47.505794  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.505858  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 11:46:47.512033  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.512099  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 11:46:47.517851  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.517999  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 11:46:47.523652  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.523813  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 11:46:47.530371  169037 command_runner.go:130] > Certificate will not expire
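The certificate handling above is the standard OpenSSL hashed-directory convention: each CA file is symlinked into /etc/ssl/certs as <subject-hash>.0 so TLS clients can find it, and -checkend 86400 asks whether a certificate expires within the next 24 hours (exit status 0 means it does not). A minimal sketch of the same checks run by hand; the certificate path is one of those used above:

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941, as logged above
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # hashed-directory symlink
# Exit status 0 means the cert is still valid for at least the next 86400 seconds.
openssl x509 -noout -in "$CERT" -checkend 86400 && echo "will not expire within 24h"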
	I1028 11:46:47.530452  169037 kubeadm.go:392] StartCluster: {Name:multinode-450140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-450140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:46:47.530563  169037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:46:47.530600  169037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:46:47.567757  169037 command_runner.go:130] > 9984aacc72a9ff0993332fc0aad6ce2c6f5615875c7c3792438749e989d62b02
	I1028 11:46:47.567781  169037 command_runner.go:130] > 17a75cb502bdea553a2767bbe692e8c3ea1adad72f5c831f79a3e46e8d1abb6a
	I1028 11:46:47.567788  169037 command_runner.go:130] > 47a5d72b6c318b157aa2347b37035c31d864480ae18f027cab73c5ad66b69df2
	I1028 11:46:47.567795  169037 command_runner.go:130] > df86ae076d7bd3d46e4426e3b61b4c3581afabc1cea0cf28388e7963c454b7f5
	I1028 11:46:47.567801  169037 command_runner.go:130] > 09898f5c3ea283707dff548f9f360641786b8042a2a30675090bb9d1f05f5742
	I1028 11:46:47.567806  169037 command_runner.go:130] > caf0607fae8fc41f7e25dc9d1aca76ed1f31891d71edf83d4357e4c4a17affd3
	I1028 11:46:47.567811  169037 command_runner.go:130] > 1aefc2add33bd17169acd9dc5d93f640dca78b9793c9c293a4ca02b16a433764
	I1028 11:46:47.567832  169037 command_runner.go:130] > 2163c6c718431b9cd8d8eb3c8370f2383b3ada331a0a8cbdff600c64220e975b
	I1028 11:46:47.569258  169037 cri.go:89] found id: "9984aacc72a9ff0993332fc0aad6ce2c6f5615875c7c3792438749e989d62b02"
	I1028 11:46:47.569274  169037 cri.go:89] found id: "17a75cb502bdea553a2767bbe692e8c3ea1adad72f5c831f79a3e46e8d1abb6a"
	I1028 11:46:47.569280  169037 cri.go:89] found id: "47a5d72b6c318b157aa2347b37035c31d864480ae18f027cab73c5ad66b69df2"
	I1028 11:46:47.569300  169037 cri.go:89] found id: "df86ae076d7bd3d46e4426e3b61b4c3581afabc1cea0cf28388e7963c454b7f5"
	I1028 11:46:47.569314  169037 cri.go:89] found id: "09898f5c3ea283707dff548f9f360641786b8042a2a30675090bb9d1f05f5742"
	I1028 11:46:47.569317  169037 cri.go:89] found id: "caf0607fae8fc41f7e25dc9d1aca76ed1f31891d71edf83d4357e4c4a17affd3"
	I1028 11:46:47.569319  169037 cri.go:89] found id: "1aefc2add33bd17169acd9dc5d93f640dca78b9793c9c293a4ca02b16a433764"
	I1028 11:46:47.569322  169037 cri.go:89] found id: "2163c6c718431b9cd8d8eb3c8370f2383b3ada331a0a8cbdff600c64220e975b"
	I1028 11:46:47.569325  169037 cri.go:89] found id: ""
	I1028 11:46:47.569363  169037 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-450140 -n multinode-450140
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-450140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (333.10s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 stop
E1028 11:50:09.887288  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-450140 stop: exit status 82 (2m0.488222589s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-450140-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-450140 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-450140 status: (18.848866709s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr: (3.359547886s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-450140 -n multinode-450140
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-450140 logs -n 25: (2.129838838s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m02:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140:/home/docker/cp-test_multinode-450140-m02_multinode-450140.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140 sudo cat                                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m02_multinode-450140.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m02:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03:/home/docker/cp-test_multinode-450140-m02_multinode-450140-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140-m03 sudo cat                                   | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m02_multinode-450140-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp testdata/cp-test.txt                                                | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile25711815/001/cp-test_multinode-450140-m03.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140:/home/docker/cp-test_multinode-450140-m03_multinode-450140.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140 sudo cat                                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m03_multinode-450140.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt                       | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02:/home/docker/cp-test_multinode-450140-m03_multinode-450140-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140-m02 sudo cat                                   | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m03_multinode-450140-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-450140 node stop m03                                                          | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	| node    | multinode-450140 node start                                                             | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:43 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-450140                                                                | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:43 UTC |                     |
	| stop    | -p multinode-450140                                                                     | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:43 UTC |                     |
	| start   | -p multinode-450140                                                                     | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:45 UTC | 28 Oct 24 11:48 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-450140                                                                | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:48 UTC |                     |
	| node    | multinode-450140 node delete                                                            | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:48 UTC | 28 Oct 24 11:48 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-450140 stop                                                                   | multinode-450140 | jenkins | v1.34.0 | 28 Oct 24 11:48 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:45:13
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:45:13.327600  169037 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:45:13.327711  169037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:45:13.327719  169037 out.go:358] Setting ErrFile to fd 2...
	I1028 11:45:13.327725  169037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:45:13.327919  169037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:45:13.328519  169037 out.go:352] Setting JSON to false
	I1028 11:45:13.329494  169037 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5256,"bootTime":1730110657,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:45:13.329631  169037 start.go:139] virtualization: kvm guest
	I1028 11:45:13.332142  169037 out.go:177] * [multinode-450140] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:45:13.333719  169037 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:45:13.333775  169037 notify.go:220] Checking for updates...
	I1028 11:45:13.336799  169037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:45:13.338268  169037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:45:13.339797  169037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:45:13.341113  169037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:45:13.342388  169037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:45:13.344197  169037 config.go:182] Loaded profile config "multinode-450140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:45:13.344325  169037 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:45:13.345015  169037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:45:13.345109  169037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:45:13.361054  169037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I1028 11:45:13.361612  169037 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:45:13.362191  169037 main.go:141] libmachine: Using API Version  1
	I1028 11:45:13.362251  169037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:45:13.362650  169037 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:45:13.362945  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:45:13.400404  169037 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 11:45:13.401717  169037 start.go:297] selected driver: kvm2
	I1028 11:45:13.401736  169037 start.go:901] validating driver "kvm2" against &{Name:multinode-450140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-450140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:45:13.401889  169037 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:45:13.402253  169037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:45:13.402332  169037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:45:13.418818  169037 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:45:13.419550  169037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:45:13.419585  169037 cni.go:84] Creating CNI manager for ""
	I1028 11:45:13.419640  169037 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 11:45:13.419702  169037 start.go:340] cluster config:
	{Name:multinode-450140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-450140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:45:13.419835  169037 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:45:13.423185  169037 out.go:177] * Starting "multinode-450140" primary control-plane node in "multinode-450140" cluster
	I1028 11:45:13.424681  169037 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:45:13.424730  169037 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 11:45:13.424741  169037 cache.go:56] Caching tarball of preloaded images
	I1028 11:45:13.424844  169037 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 11:45:13.424858  169037 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 11:45:13.424969  169037 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/config.json ...
	I1028 11:45:13.425171  169037 start.go:360] acquireMachinesLock for multinode-450140: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:45:13.425219  169037 start.go:364] duration metric: took 27.139µs to acquireMachinesLock for "multinode-450140"
	I1028 11:45:13.425240  169037 start.go:96] Skipping create...Using existing machine configuration
	I1028 11:45:13.425248  169037 fix.go:54] fixHost starting: 
	I1028 11:45:13.425499  169037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:45:13.425547  169037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:45:13.441286  169037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I1028 11:45:13.441785  169037 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:45:13.442272  169037 main.go:141] libmachine: Using API Version  1
	I1028 11:45:13.442295  169037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:45:13.442576  169037 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:45:13.442757  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:45:13.442888  169037 main.go:141] libmachine: (multinode-450140) Calling .GetState
	I1028 11:45:13.444501  169037 fix.go:112] recreateIfNeeded on multinode-450140: state=Running err=<nil>
	W1028 11:45:13.444521  169037 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 11:45:13.446590  169037 out.go:177] * Updating the running kvm2 "multinode-450140" VM ...
	I1028 11:45:13.448117  169037 machine.go:93] provisionDockerMachine start ...
	I1028 11:45:13.448135  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:45:13.448333  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.451048  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.451515  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.451540  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.451657  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:13.451835  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.452014  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.452173  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:13.452375  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:45:13.452607  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:45:13.452621  169037 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:45:13.555199  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-450140
	
	I1028 11:45:13.555233  169037 main.go:141] libmachine: (multinode-450140) Calling .GetMachineName
	I1028 11:45:13.555494  169037 buildroot.go:166] provisioning hostname "multinode-450140"
	I1028 11:45:13.555518  169037 main.go:141] libmachine: (multinode-450140) Calling .GetMachineName
	I1028 11:45:13.555701  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.558726  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.559020  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.559046  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.559259  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:13.559577  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.559777  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.559951  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:13.560153  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:45:13.560354  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:45:13.560369  169037 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-450140 && echo "multinode-450140" | sudo tee /etc/hostname
	I1028 11:45:13.683996  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-450140
	
	I1028 11:45:13.684031  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.687391  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.687961  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.687993  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.688173  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:13.688405  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.688599  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.688750  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:13.688913  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:45:13.689132  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:45:13.689152  169037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-450140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-450140/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-450140' | sudo tee -a /etc/hosts; 
				fi
			fi
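	(Note: the provisioning script above only guarantees that /etc/hosts inside the VM maps 127.0.1.1 to the machine name. Assuming a stock Buildroot guest that already carries a 127.0.1.1 line, the resulting entry is simply the line below; this is an illustrative sketch, not captured from the VM.)
	127.0.1.1 multinode-450140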
	I1028 11:45:13.790915  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:45:13.790948  169037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:45:13.790993  169037 buildroot.go:174] setting up certificates
	I1028 11:45:13.791005  169037 provision.go:84] configureAuth start
	I1028 11:45:13.791021  169037 main.go:141] libmachine: (multinode-450140) Calling .GetMachineName
	I1028 11:45:13.791338  169037 main.go:141] libmachine: (multinode-450140) Calling .GetIP
	I1028 11:45:13.794136  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.794519  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.794540  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.794720  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.796958  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.797388  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.797424  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.797628  169037 provision.go:143] copyHostCerts
	I1028 11:45:13.797658  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:45:13.797704  169037 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:45:13.797717  169037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:45:13.797788  169037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:45:13.797878  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:45:13.797896  169037 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:45:13.797902  169037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:45:13.797926  169037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:45:13.797985  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:45:13.798008  169037 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:45:13.798012  169037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:45:13.798033  169037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:45:13.798097  169037 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.multinode-450140 san=[127.0.0.1 192.168.39.184 localhost minikube multinode-450140]
	I1028 11:45:13.950957  169037 provision.go:177] copyRemoteCerts
	I1028 11:45:13.951021  169037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:45:13.951045  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:13.954061  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.954465  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:13.954489  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:13.954648  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:13.954829  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:13.955034  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:13.955173  169037 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140/id_rsa Username:docker}
	I1028 11:45:14.037088  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1028 11:45:14.037172  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1028 11:45:14.064480  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1028 11:45:14.064555  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 11:45:14.089739  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1028 11:45:14.089823  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:45:14.116166  169037 provision.go:87] duration metric: took 325.144551ms to configureAuth
	I1028 11:45:14.116198  169037 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:45:14.116439  169037 config.go:182] Loaded profile config "multinode-450140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:45:14.116527  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:45:14.119466  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:14.119842  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:45:14.119886  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:45:14.120046  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:45:14.120226  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:14.120385  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:45:14.120525  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:45:14.120684  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:45:14.120862  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:45:14.120881  169037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:46:44.852498  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:46:44.852532  169037 machine.go:96] duration metric: took 1m31.404402849s to provisionDockerMachine
	I1028 11:46:44.852549  169037 start.go:293] postStartSetup for "multinode-450140" (driver="kvm2")
	I1028 11:46:44.852566  169037 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:46:44.852592  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:44.852962  169037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:46:44.852998  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:46:44.856491  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:44.856939  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:44.856958  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:44.857173  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:46:44.857381  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:44.857551  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:46:44.857713  169037 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140/id_rsa Username:docker}
	I1028 11:46:44.937837  169037 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:46:44.942421  169037 command_runner.go:130] > NAME=Buildroot
	I1028 11:46:44.942444  169037 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1028 11:46:44.942450  169037 command_runner.go:130] > ID=buildroot
	I1028 11:46:44.942457  169037 command_runner.go:130] > VERSION_ID=2023.02.9
	I1028 11:46:44.942464  169037 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1028 11:46:44.942499  169037 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:46:44.942515  169037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:46:44.942589  169037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:46:44.942710  169037 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:46:44.942728  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /etc/ssl/certs/1403032.pem
	I1028 11:46:44.942885  169037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:46:44.952524  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:46:44.978535  169037 start.go:296] duration metric: took 125.963227ms for postStartSetup
	I1028 11:46:44.978580  169037 fix.go:56] duration metric: took 1m31.55333138s for fixHost
	I1028 11:46:44.978598  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:46:44.981553  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:44.981937  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:44.981967  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:44.982159  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:46:44.982347  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:44.982490  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:44.982650  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:46:44.982790  169037 main.go:141] libmachine: Using SSH client type: native
	I1028 11:46:44.983015  169037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I1028 11:46:44.983028  169037 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:46:45.083362  169037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116005.061035943
	
	I1028 11:46:45.083389  169037 fix.go:216] guest clock: 1730116005.061035943
	I1028 11:46:45.083400  169037 fix.go:229] Guest: 2024-10-28 11:46:45.061035943 +0000 UTC Remote: 2024-10-28 11:46:44.978583662 +0000 UTC m=+91.692702822 (delta=82.452281ms)
	I1028 11:46:45.083428  169037 fix.go:200] guest clock delta is within tolerance: 82.452281ms
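	(Note: the delta reported above is the guest's `date +%s.%N` reading minus minikube's own "Remote" timestamp from the previous line, i.e. roughly 1730116005.061035943 - 1730116004.978583662 = 0.082452281 s = 82.452281 ms, well inside the skew tolerance. The epoch form of the Remote timestamp is inferred from the Guest/Remote pair printed above.)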
	I1028 11:46:45.083442  169037 start.go:83] releasing machines lock for "multinode-450140", held for 1m31.658210704s
	I1028 11:46:45.083471  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:45.083715  169037 main.go:141] libmachine: (multinode-450140) Calling .GetIP
	I1028 11:46:45.086620  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.087001  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:45.087041  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.087214  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:45.087731  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:45.087898  169037 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:46:45.088011  169037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:46:45.088054  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:46:45.088098  169037 ssh_runner.go:195] Run: cat /version.json
	I1028 11:46:45.088127  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:46:45.090595  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.090916  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.090981  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:45.091001  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.091181  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:46:45.091370  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:45.091444  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:45.091470  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:45.091516  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:46:45.091626  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:46:45.091695  169037 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140/id_rsa Username:docker}
	I1028 11:46:45.091779  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:46:45.091881  169037 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:46:45.092030  169037 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140/id_rsa Username:docker}
	I1028 11:46:45.192437  169037 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1028 11:46:45.193418  169037 command_runner.go:130] > {"iso_version": "v1.34.0-1729002252-19806", "kicbase_version": "v0.0.45-1728382586-19774", "minikube_version": "v1.34.0", "commit": "0b046a85be42f4631dd3453091a30d7fc1803a43"}
	I1028 11:46:45.193589  169037 ssh_runner.go:195] Run: systemctl --version
	I1028 11:46:45.199640  169037 command_runner.go:130] > systemd 252 (252)
	I1028 11:46:45.199685  169037 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1028 11:46:45.199738  169037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:46:45.357469  169037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1028 11:46:45.366728  169037 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1028 11:46:45.366776  169037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:46:45.366822  169037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:46:45.377495  169037 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 11:46:45.377519  169037 start.go:495] detecting cgroup driver to use...
	I1028 11:46:45.377587  169037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:46:45.395950  169037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:46:45.410707  169037 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:46:45.410758  169037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:46:45.426056  169037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:46:45.440858  169037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:46:45.597856  169037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:46:45.745949  169037 docker.go:233] disabling docker service ...
	I1028 11:46:45.746028  169037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:46:45.768547  169037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:46:45.785660  169037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:46:45.936848  169037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:46:46.087764  169037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:46:46.102753  169037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:46:46.123060  169037 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1028 11:46:46.123567  169037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 11:46:46.123627  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.134754  169037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:46:46.134823  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.145652  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.156123  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.166855  169037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:46:46.178020  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.189033  169037 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:46:46.201263  169037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
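	(Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is an illustrative sketch assembled from the commands shown in this log, not a capture of the actual file on the VM.)
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]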
	I1028 11:46:46.212413  169037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:46:46.222573  169037 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1028 11:46:46.222678  169037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:46:46.232467  169037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:46:46.368883  169037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:46:46.570557  169037 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:46:46.570621  169037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:46:46.576450  169037 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1028 11:46:46.576478  169037 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1028 11:46:46.576487  169037 command_runner.go:130] > Device: 0,22	Inode: 1259        Links: 1
	I1028 11:46:46.576496  169037 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 11:46:46.576503  169037 command_runner.go:130] > Access: 2024-10-28 11:46:46.438523975 +0000
	I1028 11:46:46.576511  169037 command_runner.go:130] > Modify: 2024-10-28 11:46:46.438523975 +0000
	I1028 11:46:46.576517  169037 command_runner.go:130] > Change: 2024-10-28 11:46:46.438523975 +0000
	I1028 11:46:46.576522  169037 command_runner.go:130] >  Birth: -
	I1028 11:46:46.576564  169037 start.go:563] Will wait 60s for crictl version
	I1028 11:46:46.576622  169037 ssh_runner.go:195] Run: which crictl
	I1028 11:46:46.580882  169037 command_runner.go:130] > /usr/bin/crictl
	I1028 11:46:46.581035  169037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:46:46.623175  169037 command_runner.go:130] > Version:  0.1.0
	I1028 11:46:46.623208  169037 command_runner.go:130] > RuntimeName:  cri-o
	I1028 11:46:46.623215  169037 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1028 11:46:46.623223  169037 command_runner.go:130] > RuntimeApiVersion:  v1
	I1028 11:46:46.623309  169037 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:46:46.623391  169037 ssh_runner.go:195] Run: crio --version
	I1028 11:46:46.653549  169037 command_runner.go:130] > crio version 1.29.1
	I1028 11:46:46.653570  169037 command_runner.go:130] > Version:        1.29.1
	I1028 11:46:46.653576  169037 command_runner.go:130] > GitCommit:      unknown
	I1028 11:46:46.653581  169037 command_runner.go:130] > GitCommitDate:  unknown
	I1028 11:46:46.653585  169037 command_runner.go:130] > GitTreeState:   clean
	I1028 11:46:46.653590  169037 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1028 11:46:46.653594  169037 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 11:46:46.653598  169037 command_runner.go:130] > Compiler:       gc
	I1028 11:46:46.653602  169037 command_runner.go:130] > Platform:       linux/amd64
	I1028 11:46:46.653606  169037 command_runner.go:130] > Linkmode:       dynamic
	I1028 11:46:46.653610  169037 command_runner.go:130] > BuildTags:      
	I1028 11:46:46.653615  169037 command_runner.go:130] >   containers_image_ostree_stub
	I1028 11:46:46.653619  169037 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 11:46:46.653623  169037 command_runner.go:130] >   btrfs_noversion
	I1028 11:46:46.653662  169037 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 11:46:46.653679  169037 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 11:46:46.653684  169037 command_runner.go:130] >   seccomp
	I1028 11:46:46.653687  169037 command_runner.go:130] > LDFlags:          unknown
	I1028 11:46:46.653701  169037 command_runner.go:130] > SeccompEnabled:   true
	I1028 11:46:46.653708  169037 command_runner.go:130] > AppArmorEnabled:  false
	I1028 11:46:46.653811  169037 ssh_runner.go:195] Run: crio --version
	I1028 11:46:46.684952  169037 command_runner.go:130] > crio version 1.29.1
	I1028 11:46:46.684998  169037 command_runner.go:130] > Version:        1.29.1
	I1028 11:46:46.685009  169037 command_runner.go:130] > GitCommit:      unknown
	I1028 11:46:46.685014  169037 command_runner.go:130] > GitCommitDate:  unknown
	I1028 11:46:46.685020  169037 command_runner.go:130] > GitTreeState:   clean
	I1028 11:46:46.685028  169037 command_runner.go:130] > BuildDate:      2024-10-15T20:00:52Z
	I1028 11:46:46.685035  169037 command_runner.go:130] > GoVersion:      go1.21.6
	I1028 11:46:46.685042  169037 command_runner.go:130] > Compiler:       gc
	I1028 11:46:46.685062  169037 command_runner.go:130] > Platform:       linux/amd64
	I1028 11:46:46.685070  169037 command_runner.go:130] > Linkmode:       dynamic
	I1028 11:46:46.685082  169037 command_runner.go:130] > BuildTags:      
	I1028 11:46:46.685090  169037 command_runner.go:130] >   containers_image_ostree_stub
	I1028 11:46:46.685099  169037 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1028 11:46:46.685106  169037 command_runner.go:130] >   btrfs_noversion
	I1028 11:46:46.685115  169037 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1028 11:46:46.685123  169037 command_runner.go:130] >   libdm_no_deferred_remove
	I1028 11:46:46.685132  169037 command_runner.go:130] >   seccomp
	I1028 11:46:46.685141  169037 command_runner.go:130] > LDFlags:          unknown
	I1028 11:46:46.685151  169037 command_runner.go:130] > SeccompEnabled:   true
	I1028 11:46:46.685158  169037 command_runner.go:130] > AppArmorEnabled:  false
	I1028 11:46:46.688680  169037 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 11:46:46.690363  169037 main.go:141] libmachine: (multinode-450140) Calling .GetIP
	I1028 11:46:46.692879  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:46.693251  169037 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:46:46.693276  169037 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:46:46.693455  169037 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:46:46.698318  169037 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1028 11:46:46.698435  169037 kubeadm.go:883] updating cluster {Name:multinode-450140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-450140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:46:46.698579  169037 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 11:46:46.698626  169037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:46:46.743579  169037 command_runner.go:130] > {
	I1028 11:46:46.743600  169037 command_runner.go:130] >   "images": [
	I1028 11:46:46.743605  169037 command_runner.go:130] >     {
	I1028 11:46:46.743613  169037 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 11:46:46.743617  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743623  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 11:46:46.743626  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743630  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743639  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 11:46:46.743646  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 11:46:46.743650  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743655  169037 command_runner.go:130] >       "size": "94965812",
	I1028 11:46:46.743659  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.743666  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.743672  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.743677  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.743681  169037 command_runner.go:130] >     },
	I1028 11:46:46.743686  169037 command_runner.go:130] >     {
	I1028 11:46:46.743692  169037 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 11:46:46.743698  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743704  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 11:46:46.743707  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743711  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743719  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 11:46:46.743726  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 11:46:46.743732  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743736  169037 command_runner.go:130] >       "size": "1363676",
	I1028 11:46:46.743742  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.743750  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.743754  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.743758  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.743764  169037 command_runner.go:130] >     },
	I1028 11:46:46.743768  169037 command_runner.go:130] >     {
	I1028 11:46:46.743774  169037 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 11:46:46.743780  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743785  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 11:46:46.743789  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743792  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743800  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 11:46:46.743809  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 11:46:46.743813  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743819  169037 command_runner.go:130] >       "size": "31470524",
	I1028 11:46:46.743823  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.743829  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.743833  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.743840  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.743844  169037 command_runner.go:130] >     },
	I1028 11:46:46.743850  169037 command_runner.go:130] >     {
	I1028 11:46:46.743856  169037 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 11:46:46.743862  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743867  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 11:46:46.743873  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743877  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743886  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 11:46:46.743899  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 11:46:46.743905  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743909  169037 command_runner.go:130] >       "size": "63273227",
	I1028 11:46:46.743915  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.743919  169037 command_runner.go:130] >       "username": "nonroot",
	I1028 11:46:46.743925  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.743937  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.743943  169037 command_runner.go:130] >     },
	I1028 11:46:46.743947  169037 command_runner.go:130] >     {
	I1028 11:46:46.743955  169037 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 11:46:46.743959  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.743964  169037 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 11:46:46.743968  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743972  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.743981  169037 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 11:46:46.743989  169037 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 11:46:46.743995  169037 command_runner.go:130] >       ],
	I1028 11:46:46.743999  169037 command_runner.go:130] >       "size": "149009664",
	I1028 11:46:46.744005  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744009  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.744015  169037 command_runner.go:130] >       },
	I1028 11:46:46.744018  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744034  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744041  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744045  169037 command_runner.go:130] >     },
	I1028 11:46:46.744050  169037 command_runner.go:130] >     {
	I1028 11:46:46.744056  169037 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 11:46:46.744063  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744068  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 11:46:46.744073  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744077  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744086  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 11:46:46.744096  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 11:46:46.744102  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744106  169037 command_runner.go:130] >       "size": "95274464",
	I1028 11:46:46.744112  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744116  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.744122  169037 command_runner.go:130] >       },
	I1028 11:46:46.744126  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744133  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744137  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744143  169037 command_runner.go:130] >     },
	I1028 11:46:46.744147  169037 command_runner.go:130] >     {
	I1028 11:46:46.744155  169037 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 11:46:46.744161  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744167  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 11:46:46.744172  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744176  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744186  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 11:46:46.744195  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 11:46:46.744199  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744206  169037 command_runner.go:130] >       "size": "89474374",
	I1028 11:46:46.744210  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744213  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.744219  169037 command_runner.go:130] >       },
	I1028 11:46:46.744223  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744229  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744233  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744239  169037 command_runner.go:130] >     },
	I1028 11:46:46.744242  169037 command_runner.go:130] >     {
	I1028 11:46:46.744250  169037 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 11:46:46.744254  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744259  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 11:46:46.744266  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744269  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744285  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 11:46:46.744294  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 11:46:46.744298  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744304  169037 command_runner.go:130] >       "size": "92783513",
	I1028 11:46:46.744308  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.744312  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744315  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744320  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744324  169037 command_runner.go:130] >     },
	I1028 11:46:46.744327  169037 command_runner.go:130] >     {
	I1028 11:46:46.744332  169037 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 11:46:46.744336  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744340  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 11:46:46.744344  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744347  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744354  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 11:46:46.744361  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 11:46:46.744364  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744368  169037 command_runner.go:130] >       "size": "68457798",
	I1028 11:46:46.744371  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744374  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.744377  169037 command_runner.go:130] >       },
	I1028 11:46:46.744381  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744384  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744388  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.744390  169037 command_runner.go:130] >     },
	I1028 11:46:46.744394  169037 command_runner.go:130] >     {
	I1028 11:46:46.744399  169037 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 11:46:46.744403  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.744407  169037 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 11:46:46.744413  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744417  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.744425  169037 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 11:46:46.744434  169037 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 11:46:46.744440  169037 command_runner.go:130] >       ],
	I1028 11:46:46.744444  169037 command_runner.go:130] >       "size": "742080",
	I1028 11:46:46.744450  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.744454  169037 command_runner.go:130] >         "value": "65535"
	I1028 11:46:46.744460  169037 command_runner.go:130] >       },
	I1028 11:46:46.744464  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.744471  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.744474  169037 command_runner.go:130] >       "pinned": true
	I1028 11:46:46.744481  169037 command_runner.go:130] >     }
	I1028 11:46:46.744484  169037 command_runner.go:130] >   ]
	I1028 11:46:46.744490  169037 command_runner.go:130] > }
	I1028 11:46:46.745075  169037 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:46:46.745095  169037 crio.go:433] Images already preloaded, skipping extraction
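	Note: the two crio.go lines above conclude the preload check; the images reported by crictl already cover everything Kubernetes v1.31.2 needs on CRI-O, so tarball extraction is skipped. The following is a minimal standalone sketch of that kind of check, not minikube's actual crio.go code: the struct fields only model what is visible in the JSON above, and the expected tag list is copied from that listing.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImage models only the fields shown in the log output above.
	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// Same command the log shows minikube running on the node.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Expected tags taken from the listing above (Kubernetes v1.31.2 on CRI-O).
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/kube-controller-manager:v1.31.2",
			"registry.k8s.io/kube-scheduler:v1.31.2",
			"registry.k8s.io/kube-proxy:v1.31.2",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/pause:3.10",
		}
		for _, tag := range expected {
			if !have[tag] {
				fmt.Println("missing:", tag)
				return
			}
		}
		fmt.Println("all images are preloaded")
	}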
	I1028 11:46:46.745143  169037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:46:46.778942  169037 command_runner.go:130] > {
	I1028 11:46:46.778966  169037 command_runner.go:130] >   "images": [
	I1028 11:46:46.778974  169037 command_runner.go:130] >     {
	I1028 11:46:46.778981  169037 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1028 11:46:46.778986  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.778992  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1028 11:46:46.778996  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779000  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779011  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1028 11:46:46.779021  169037 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1028 11:46:46.779024  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779029  169037 command_runner.go:130] >       "size": "94965812",
	I1028 11:46:46.779033  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779041  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779052  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779057  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779061  169037 command_runner.go:130] >     },
	I1028 11:46:46.779064  169037 command_runner.go:130] >     {
	I1028 11:46:46.779070  169037 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1028 11:46:46.779075  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779080  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1028 11:46:46.779083  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779087  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779094  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1028 11:46:46.779101  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1028 11:46:46.779105  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779109  169037 command_runner.go:130] >       "size": "1363676",
	I1028 11:46:46.779113  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779152  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779161  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779165  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779168  169037 command_runner.go:130] >     },
	I1028 11:46:46.779171  169037 command_runner.go:130] >     {
	I1028 11:46:46.779177  169037 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1028 11:46:46.779181  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779186  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1028 11:46:46.779191  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779195  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779202  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1028 11:46:46.779210  169037 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1028 11:46:46.779215  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779223  169037 command_runner.go:130] >       "size": "31470524",
	I1028 11:46:46.779227  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779230  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779234  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779243  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779247  169037 command_runner.go:130] >     },
	I1028 11:46:46.779259  169037 command_runner.go:130] >     {
	I1028 11:46:46.779268  169037 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1028 11:46:46.779273  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779278  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1028 11:46:46.779282  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779286  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779294  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1028 11:46:46.779305  169037 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1028 11:46:46.779308  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779312  169037 command_runner.go:130] >       "size": "63273227",
	I1028 11:46:46.779317  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779325  169037 command_runner.go:130] >       "username": "nonroot",
	I1028 11:46:46.779331  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779335  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779340  169037 command_runner.go:130] >     },
	I1028 11:46:46.779344  169037 command_runner.go:130] >     {
	I1028 11:46:46.779350  169037 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1028 11:46:46.779355  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779360  169037 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1028 11:46:46.779364  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779367  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779374  169037 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1028 11:46:46.779383  169037 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1028 11:46:46.779387  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779392  169037 command_runner.go:130] >       "size": "149009664",
	I1028 11:46:46.779395  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779399  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.779403  169037 command_runner.go:130] >       },
	I1028 11:46:46.779407  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779410  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779415  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779419  169037 command_runner.go:130] >     },
	I1028 11:46:46.779422  169037 command_runner.go:130] >     {
	I1028 11:46:46.779431  169037 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1028 11:46:46.779435  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779440  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1028 11:46:46.779446  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779451  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779460  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1028 11:46:46.779469  169037 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1028 11:46:46.779475  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779479  169037 command_runner.go:130] >       "size": "95274464",
	I1028 11:46:46.779485  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779490  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.779495  169037 command_runner.go:130] >       },
	I1028 11:46:46.779500  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779506  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779512  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779518  169037 command_runner.go:130] >     },
	I1028 11:46:46.779521  169037 command_runner.go:130] >     {
	I1028 11:46:46.779527  169037 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1028 11:46:46.779533  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779538  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1028 11:46:46.779541  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779545  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779553  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1028 11:46:46.779562  169037 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1028 11:46:46.779568  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779575  169037 command_runner.go:130] >       "size": "89474374",
	I1028 11:46:46.779579  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779585  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.779589  169037 command_runner.go:130] >       },
	I1028 11:46:46.779595  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779599  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779605  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779608  169037 command_runner.go:130] >     },
	I1028 11:46:46.779614  169037 command_runner.go:130] >     {
	I1028 11:46:46.779620  169037 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1028 11:46:46.779627  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779632  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1028 11:46:46.779638  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779642  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779658  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1028 11:46:46.779667  169037 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1028 11:46:46.779671  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779677  169037 command_runner.go:130] >       "size": "92783513",
	I1028 11:46:46.779681  169037 command_runner.go:130] >       "uid": null,
	I1028 11:46:46.779687  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779691  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779698  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779702  169037 command_runner.go:130] >     },
	I1028 11:46:46.779708  169037 command_runner.go:130] >     {
	I1028 11:46:46.779714  169037 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1028 11:46:46.779721  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779726  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1028 11:46:46.779732  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779736  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779745  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1028 11:46:46.779754  169037 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1028 11:46:46.779760  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779764  169037 command_runner.go:130] >       "size": "68457798",
	I1028 11:46:46.779770  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779774  169037 command_runner.go:130] >         "value": "0"
	I1028 11:46:46.779780  169037 command_runner.go:130] >       },
	I1028 11:46:46.779785  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779791  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779795  169037 command_runner.go:130] >       "pinned": false
	I1028 11:46:46.779800  169037 command_runner.go:130] >     },
	I1028 11:46:46.779804  169037 command_runner.go:130] >     {
	I1028 11:46:46.779812  169037 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1028 11:46:46.779818  169037 command_runner.go:130] >       "repoTags": [
	I1028 11:46:46.779823  169037 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1028 11:46:46.779829  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779833  169037 command_runner.go:130] >       "repoDigests": [
	I1028 11:46:46.779842  169037 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1028 11:46:46.779854  169037 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1028 11:46:46.779860  169037 command_runner.go:130] >       ],
	I1028 11:46:46.779864  169037 command_runner.go:130] >       "size": "742080",
	I1028 11:46:46.779867  169037 command_runner.go:130] >       "uid": {
	I1028 11:46:46.779874  169037 command_runner.go:130] >         "value": "65535"
	I1028 11:46:46.779877  169037 command_runner.go:130] >       },
	I1028 11:46:46.779883  169037 command_runner.go:130] >       "username": "",
	I1028 11:46:46.779887  169037 command_runner.go:130] >       "spec": null,
	I1028 11:46:46.779893  169037 command_runner.go:130] >       "pinned": true
	I1028 11:46:46.779897  169037 command_runner.go:130] >     }
	I1028 11:46:46.779902  169037 command_runner.go:130] >   ]
	I1028 11:46:46.779905  169037 command_runner.go:130] > }
	I1028 11:46:46.780519  169037 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 11:46:46.780538  169037 cache_images.go:84] Images are preloaded, skipping loading
	I1028 11:46:46.780546  169037 kubeadm.go:934] updating node { 192.168.39.184 8443 v1.31.2 crio true true} ...
	I1028 11:46:46.780668  169037 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-450140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-450140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
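	Note: the unit-file fragment above is the kubelet systemd drop-in minikube writes for this control-plane node, with the hostname override and node IP taken from the per-node config that follows it. As a rough illustration only (a hypothetical helper, not the code in kubeadm.go), the same drop-in could be rendered from those three values:

	package main

	import "fmt"

	// kubeletUnit renders a drop-in matching the one logged above; the binary
	// path and flag set are copied from that log line, everything else is assumed.
	func kubeletUnit(version, nodeName, nodeIP string) string {
		return fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, version, nodeName, nodeIP)
	}

	func main() {
		// Values taken from the log above for the control-plane node.
		fmt.Print(kubeletUnit("v1.31.2", "multinode-450140", "192.168.39.184"))
	}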
	I1028 11:46:46.780767  169037 ssh_runner.go:195] Run: crio config
	I1028 11:46:46.815684  169037 command_runner.go:130] ! time="2024-10-28 11:46:46.793481024Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1028 11:46:46.826895  169037 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1028 11:46:46.838329  169037 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1028 11:46:46.838353  169037 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1028 11:46:46.838360  169037 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1028 11:46:46.838364  169037 command_runner.go:130] > #
	I1028 11:46:46.838387  169037 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1028 11:46:46.838395  169037 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1028 11:46:46.838402  169037 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1028 11:46:46.838409  169037 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1028 11:46:46.838413  169037 command_runner.go:130] > # reload'.
	I1028 11:46:46.838419  169037 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1028 11:46:46.838425  169037 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1028 11:46:46.838431  169037 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1028 11:46:46.838436  169037 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1028 11:46:46.838448  169037 command_runner.go:130] > [crio]
	I1028 11:46:46.838453  169037 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1028 11:46:46.838461  169037 command_runner.go:130] > # containers images, in this directory.
	I1028 11:46:46.838466  169037 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1028 11:46:46.838477  169037 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1028 11:46:46.838484  169037 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1028 11:46:46.838491  169037 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1028 11:46:46.838498  169037 command_runner.go:130] > # imagestore = ""
	I1028 11:46:46.838504  169037 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1028 11:46:46.838509  169037 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1028 11:46:46.838516  169037 command_runner.go:130] > storage_driver = "overlay"
	I1028 11:46:46.838521  169037 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1028 11:46:46.838527  169037 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1028 11:46:46.838531  169037 command_runner.go:130] > storage_option = [
	I1028 11:46:46.838536  169037 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1028 11:46:46.838545  169037 command_runner.go:130] > ]
	I1028 11:46:46.838554  169037 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1028 11:46:46.838560  169037 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1028 11:46:46.838567  169037 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1028 11:46:46.838572  169037 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1028 11:46:46.838580  169037 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1028 11:46:46.838587  169037 command_runner.go:130] > # always happen on a node reboot
	I1028 11:46:46.838592  169037 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1028 11:46:46.838606  169037 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1028 11:46:46.838615  169037 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1028 11:46:46.838622  169037 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1028 11:46:46.838627  169037 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1028 11:46:46.838637  169037 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1028 11:46:46.838647  169037 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1028 11:46:46.838654  169037 command_runner.go:130] > # internal_wipe = true
	I1028 11:46:46.838662  169037 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1028 11:46:46.838669  169037 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1028 11:46:46.838673  169037 command_runner.go:130] > # internal_repair = false
	I1028 11:46:46.838681  169037 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1028 11:46:46.838687  169037 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1028 11:46:46.838694  169037 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1028 11:46:46.838699  169037 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1028 11:46:46.838710  169037 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1028 11:46:46.838716  169037 command_runner.go:130] > [crio.api]
	I1028 11:46:46.838721  169037 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1028 11:46:46.838728  169037 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1028 11:46:46.838733  169037 command_runner.go:130] > # IP address on which the stream server will listen.
	I1028 11:46:46.838740  169037 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1028 11:46:46.838746  169037 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1028 11:46:46.838754  169037 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1028 11:46:46.838757  169037 command_runner.go:130] > # stream_port = "0"
	I1028 11:46:46.838763  169037 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1028 11:46:46.838770  169037 command_runner.go:130] > # stream_enable_tls = false
	I1028 11:46:46.838780  169037 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1028 11:46:46.838787  169037 command_runner.go:130] > # stream_idle_timeout = ""
	I1028 11:46:46.838793  169037 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1028 11:46:46.838802  169037 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1028 11:46:46.838808  169037 command_runner.go:130] > # minutes.
	I1028 11:46:46.838812  169037 command_runner.go:130] > # stream_tls_cert = ""
	I1028 11:46:46.838819  169037 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1028 11:46:46.838828  169037 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1028 11:46:46.838833  169037 command_runner.go:130] > # stream_tls_key = ""
	I1028 11:46:46.838841  169037 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1028 11:46:46.838847  169037 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1028 11:46:46.838868  169037 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1028 11:46:46.838874  169037 command_runner.go:130] > # stream_tls_ca = ""
	I1028 11:46:46.838881  169037 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 11:46:46.838885  169037 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1028 11:46:46.838893  169037 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1028 11:46:46.838899  169037 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1028 11:46:46.838906  169037 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1028 11:46:46.838913  169037 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1028 11:46:46.838919  169037 command_runner.go:130] > [crio.runtime]
	I1028 11:46:46.838925  169037 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1028 11:46:46.838933  169037 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1028 11:46:46.838937  169037 command_runner.go:130] > # "nofile=1024:2048"
	I1028 11:46:46.838942  169037 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1028 11:46:46.838947  169037 command_runner.go:130] > # default_ulimits = [
	I1028 11:46:46.838950  169037 command_runner.go:130] > # ]
	I1028 11:46:46.838956  169037 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1028 11:46:46.838964  169037 command_runner.go:130] > # no_pivot = false
	I1028 11:46:46.838970  169037 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1028 11:46:46.838980  169037 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1028 11:46:46.838985  169037 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1028 11:46:46.838990  169037 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1028 11:46:46.838998  169037 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1028 11:46:46.839010  169037 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 11:46:46.839017  169037 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1028 11:46:46.839021  169037 command_runner.go:130] > # Cgroup setting for conmon
	I1028 11:46:46.839028  169037 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1028 11:46:46.839034  169037 command_runner.go:130] > conmon_cgroup = "pod"
	I1028 11:46:46.839040  169037 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1028 11:46:46.839046  169037 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1028 11:46:46.839055  169037 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1028 11:46:46.839059  169037 command_runner.go:130] > conmon_env = [
	I1028 11:46:46.839067  169037 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 11:46:46.839070  169037 command_runner.go:130] > ]
	I1028 11:46:46.839078  169037 command_runner.go:130] > # Additional environment variables to set for all the
	I1028 11:46:46.839085  169037 command_runner.go:130] > # containers. These are overridden if set in the
	I1028 11:46:46.839091  169037 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1028 11:46:46.839097  169037 command_runner.go:130] > # default_env = [
	I1028 11:46:46.839101  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839110  169037 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1028 11:46:46.839119  169037 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1028 11:46:46.839125  169037 command_runner.go:130] > # selinux = false
	I1028 11:46:46.839131  169037 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1028 11:46:46.839139  169037 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1028 11:46:46.839147  169037 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1028 11:46:46.839151  169037 command_runner.go:130] > # seccomp_profile = ""
	I1028 11:46:46.839159  169037 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1028 11:46:46.839164  169037 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1028 11:46:46.839172  169037 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1028 11:46:46.839179  169037 command_runner.go:130] > # which might increase security.
	I1028 11:46:46.839183  169037 command_runner.go:130] > # This option is currently deprecated,
	I1028 11:46:46.839193  169037 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1028 11:46:46.839200  169037 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1028 11:46:46.839206  169037 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1028 11:46:46.839214  169037 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1028 11:46:46.839227  169037 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1028 11:46:46.839236  169037 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1028 11:46:46.839244  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.839251  169037 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1028 11:46:46.839257  169037 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1028 11:46:46.839264  169037 command_runner.go:130] > # the cgroup blockio controller.
	I1028 11:46:46.839268  169037 command_runner.go:130] > # blockio_config_file = ""
	I1028 11:46:46.839277  169037 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1028 11:46:46.839283  169037 command_runner.go:130] > # blockio parameters.
	I1028 11:46:46.839287  169037 command_runner.go:130] > # blockio_reload = false
	I1028 11:46:46.839296  169037 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1028 11:46:46.839300  169037 command_runner.go:130] > # irqbalance daemon.
	I1028 11:46:46.839305  169037 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1028 11:46:46.839313  169037 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1028 11:46:46.839322  169037 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1028 11:46:46.839329  169037 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1028 11:46:46.839337  169037 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1028 11:46:46.839346  169037 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1028 11:46:46.839353  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.839357  169037 command_runner.go:130] > # rdt_config_file = ""
	I1028 11:46:46.839365  169037 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1028 11:46:46.839371  169037 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1028 11:46:46.839394  169037 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1028 11:46:46.839402  169037 command_runner.go:130] > # separate_pull_cgroup = ""
	I1028 11:46:46.839408  169037 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1028 11:46:46.839416  169037 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1028 11:46:46.839423  169037 command_runner.go:130] > # will be added.
	I1028 11:46:46.839427  169037 command_runner.go:130] > # default_capabilities = [
	I1028 11:46:46.839433  169037 command_runner.go:130] > # 	"CHOWN",
	I1028 11:46:46.839437  169037 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1028 11:46:46.839443  169037 command_runner.go:130] > # 	"FSETID",
	I1028 11:46:46.839447  169037 command_runner.go:130] > # 	"FOWNER",
	I1028 11:46:46.839451  169037 command_runner.go:130] > # 	"SETGID",
	I1028 11:46:46.839455  169037 command_runner.go:130] > # 	"SETUID",
	I1028 11:46:46.839461  169037 command_runner.go:130] > # 	"SETPCAP",
	I1028 11:46:46.839465  169037 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1028 11:46:46.839470  169037 command_runner.go:130] > # 	"KILL",
	I1028 11:46:46.839473  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839483  169037 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1028 11:46:46.839492  169037 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1028 11:46:46.839499  169037 command_runner.go:130] > # add_inheritable_capabilities = false
	I1028 11:46:46.839508  169037 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1028 11:46:46.839515  169037 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 11:46:46.839521  169037 command_runner.go:130] > default_sysctls = [
	I1028 11:46:46.839526  169037 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1028 11:46:46.839531  169037 command_runner.go:130] > ]
	I1028 11:46:46.839536  169037 command_runner.go:130] > # List of devices on the host that a
	I1028 11:46:46.839544  169037 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1028 11:46:46.839550  169037 command_runner.go:130] > # allowed_devices = [
	I1028 11:46:46.839554  169037 command_runner.go:130] > # 	"/dev/fuse",
	I1028 11:46:46.839559  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839564  169037 command_runner.go:130] > # List of additional devices. specified as
	I1028 11:46:46.839573  169037 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1028 11:46:46.839580  169037 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1028 11:46:46.839586  169037 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1028 11:46:46.839592  169037 command_runner.go:130] > # additional_devices = [
	I1028 11:46:46.839595  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839601  169037 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1028 11:46:46.839607  169037 command_runner.go:130] > # cdi_spec_dirs = [
	I1028 11:46:46.839611  169037 command_runner.go:130] > # 	"/etc/cdi",
	I1028 11:46:46.839617  169037 command_runner.go:130] > # 	"/var/run/cdi",
	I1028 11:46:46.839621  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839627  169037 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1028 11:46:46.839635  169037 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1028 11:46:46.839642  169037 command_runner.go:130] > # Defaults to false.
	I1028 11:46:46.839648  169037 command_runner.go:130] > # device_ownership_from_security_context = false
	I1028 11:46:46.839656  169037 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1028 11:46:46.839665  169037 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1028 11:46:46.839670  169037 command_runner.go:130] > # hooks_dir = [
	I1028 11:46:46.839675  169037 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1028 11:46:46.839680  169037 command_runner.go:130] > # ]
	I1028 11:46:46.839686  169037 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1028 11:46:46.839695  169037 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1028 11:46:46.839702  169037 command_runner.go:130] > # its default mounts from the following two files:
	I1028 11:46:46.839705  169037 command_runner.go:130] > #
	I1028 11:46:46.839713  169037 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1028 11:46:46.839722  169037 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1028 11:46:46.839730  169037 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1028 11:46:46.839733  169037 command_runner.go:130] > #
	I1028 11:46:46.839738  169037 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1028 11:46:46.839747  169037 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1028 11:46:46.839756  169037 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1028 11:46:46.839764  169037 command_runner.go:130] > #      only add mounts it finds in this file.
	I1028 11:46:46.839767  169037 command_runner.go:130] > #
	I1028 11:46:46.839771  169037 command_runner.go:130] > # default_mounts_file = ""
	I1028 11:46:46.839779  169037 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1028 11:46:46.839785  169037 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1028 11:46:46.839791  169037 command_runner.go:130] > pids_limit = 1024
	I1028 11:46:46.839797  169037 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1028 11:46:46.839805  169037 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1028 11:46:46.839813  169037 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1028 11:46:46.839821  169037 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1028 11:46:46.839827  169037 command_runner.go:130] > # log_size_max = -1
	I1028 11:46:46.839834  169037 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1028 11:46:46.839840  169037 command_runner.go:130] > # log_to_journald = false
	I1028 11:46:46.839846  169037 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1028 11:46:46.839853  169037 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1028 11:46:46.839858  169037 command_runner.go:130] > # Path to directory for container attach sockets.
	I1028 11:46:46.839865  169037 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1028 11:46:46.839871  169037 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1028 11:46:46.839878  169037 command_runner.go:130] > # bind_mount_prefix = ""
	I1028 11:46:46.839884  169037 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1028 11:46:46.839890  169037 command_runner.go:130] > # read_only = false
	I1028 11:46:46.839896  169037 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1028 11:46:46.839904  169037 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1028 11:46:46.839911  169037 command_runner.go:130] > # live configuration reload.
	I1028 11:46:46.839915  169037 command_runner.go:130] > # log_level = "info"
	I1028 11:46:46.839923  169037 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1028 11:46:46.839928  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.839935  169037 command_runner.go:130] > # log_filter = ""
	I1028 11:46:46.839940  169037 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1028 11:46:46.839949  169037 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1028 11:46:46.839953  169037 command_runner.go:130] > # separated by comma.
	I1028 11:46:46.839962  169037 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 11:46:46.839966  169037 command_runner.go:130] > # uid_mappings = ""
	I1028 11:46:46.839976  169037 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1028 11:46:46.839988  169037 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1028 11:46:46.839995  169037 command_runner.go:130] > # separated by comma.
	I1028 11:46:46.840003  169037 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 11:46:46.840011  169037 command_runner.go:130] > # gid_mappings = ""
	I1028 11:46:46.840018  169037 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1028 11:46:46.840025  169037 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 11:46:46.840031  169037 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 11:46:46.840038  169037 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 11:46:46.840044  169037 command_runner.go:130] > # minimum_mappable_uid = -1
	I1028 11:46:46.840050  169037 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1028 11:46:46.840058  169037 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1028 11:46:46.840065  169037 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1028 11:46:46.840074  169037 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1028 11:46:46.840081  169037 command_runner.go:130] > # minimum_mappable_gid = -1
	I1028 11:46:46.840087  169037 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1028 11:46:46.840095  169037 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1028 11:46:46.840103  169037 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1028 11:46:46.840110  169037 command_runner.go:130] > # ctr_stop_timeout = 30
	I1028 11:46:46.840116  169037 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1028 11:46:46.840123  169037 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1028 11:46:46.840130  169037 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1028 11:46:46.840135  169037 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1028 11:46:46.840142  169037 command_runner.go:130] > drop_infra_ctr = false
	I1028 11:46:46.840147  169037 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1028 11:46:46.840155  169037 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1028 11:46:46.840161  169037 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1028 11:46:46.840168  169037 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1028 11:46:46.840175  169037 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1028 11:46:46.840182  169037 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1028 11:46:46.840190  169037 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1028 11:46:46.840195  169037 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1028 11:46:46.840199  169037 command_runner.go:130] > # shared_cpuset = ""
	I1028 11:46:46.840207  169037 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1028 11:46:46.840212  169037 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1028 11:46:46.840218  169037 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1028 11:46:46.840229  169037 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1028 11:46:46.840235  169037 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1028 11:46:46.840241  169037 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1028 11:46:46.840251  169037 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1028 11:46:46.840258  169037 command_runner.go:130] > # enable_criu_support = false
	I1028 11:46:46.840263  169037 command_runner.go:130] > # Enable/disable the generation of the container,
	I1028 11:46:46.840272  169037 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1028 11:46:46.840279  169037 command_runner.go:130] > # enable_pod_events = false
	I1028 11:46:46.840285  169037 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1028 11:46:46.840293  169037 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1028 11:46:46.840299  169037 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1028 11:46:46.840303  169037 command_runner.go:130] > # default_runtime = "runc"
	I1028 11:46:46.840311  169037 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1028 11:46:46.840318  169037 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1028 11:46:46.840329  169037 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1028 11:46:46.840340  169037 command_runner.go:130] > # creation as a file is not desired either.
	I1028 11:46:46.840350  169037 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1028 11:46:46.840357  169037 command_runner.go:130] > # the hostname is being managed dynamically.
	I1028 11:46:46.840361  169037 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1028 11:46:46.840367  169037 command_runner.go:130] > # ]
	I1028 11:46:46.840373  169037 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1028 11:46:46.840381  169037 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1028 11:46:46.840388  169037 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1028 11:46:46.840395  169037 command_runner.go:130] > # Each entry in the table should follow the format:
	I1028 11:46:46.840397  169037 command_runner.go:130] > #
	I1028 11:46:46.840402  169037 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1028 11:46:46.840409  169037 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1028 11:46:46.840434  169037 command_runner.go:130] > # runtime_type = "oci"
	I1028 11:46:46.840441  169037 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1028 11:46:46.840446  169037 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1028 11:46:46.840450  169037 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1028 11:46:46.840454  169037 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1028 11:46:46.840458  169037 command_runner.go:130] > # monitor_env = []
	I1028 11:46:46.840465  169037 command_runner.go:130] > # privileged_without_host_devices = false
	I1028 11:46:46.840469  169037 command_runner.go:130] > # allowed_annotations = []
	I1028 11:46:46.840477  169037 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1028 11:46:46.840483  169037 command_runner.go:130] > # Where:
	I1028 11:46:46.840491  169037 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1028 11:46:46.840500  169037 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1028 11:46:46.840507  169037 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1028 11:46:46.840518  169037 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1028 11:46:46.840524  169037 command_runner.go:130] > #   in $PATH.
	I1028 11:46:46.840532  169037 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1028 11:46:46.840545  169037 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1028 11:46:46.840551  169037 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1028 11:46:46.840557  169037 command_runner.go:130] > #   state.
	I1028 11:46:46.840563  169037 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1028 11:46:46.840571  169037 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1028 11:46:46.840588  169037 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1028 11:46:46.840596  169037 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1028 11:46:46.840604  169037 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1028 11:46:46.840611  169037 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1028 11:46:46.840619  169037 command_runner.go:130] > #   The currently recognized values are:
	I1028 11:46:46.840625  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1028 11:46:46.840634  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1028 11:46:46.840640  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1028 11:46:46.840647  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1028 11:46:46.840654  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1028 11:46:46.840663  169037 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1028 11:46:46.840672  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1028 11:46:46.840680  169037 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1028 11:46:46.840689  169037 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1028 11:46:46.840698  169037 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1028 11:46:46.840705  169037 command_runner.go:130] > #   deprecated option "conmon".
	I1028 11:46:46.840711  169037 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1028 11:46:46.840718  169037 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1028 11:46:46.840725  169037 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1028 11:46:46.840732  169037 command_runner.go:130] > #   should be moved to the container's cgroup
	I1028 11:46:46.840738  169037 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1028 11:46:46.840747  169037 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1028 11:46:46.840756  169037 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1028 11:46:46.840764  169037 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1028 11:46:46.840767  169037 command_runner.go:130] > #
	I1028 11:46:46.840774  169037 command_runner.go:130] > # Using the seccomp notifier feature:
	I1028 11:46:46.840780  169037 command_runner.go:130] > #
	I1028 11:46:46.840789  169037 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1028 11:46:46.840797  169037 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1028 11:46:46.840801  169037 command_runner.go:130] > #
	I1028 11:46:46.840807  169037 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1028 11:46:46.840813  169037 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1028 11:46:46.840819  169037 command_runner.go:130] > #
	I1028 11:46:46.840827  169037 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1028 11:46:46.840834  169037 command_runner.go:130] > # feature.
	I1028 11:46:46.840838  169037 command_runner.go:130] > #
	I1028 11:46:46.840845  169037 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1028 11:46:46.840851  169037 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1028 11:46:46.840859  169037 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1028 11:46:46.840867  169037 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1028 11:46:46.840876  169037 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1028 11:46:46.840879  169037 command_runner.go:130] > #
	I1028 11:46:46.840887  169037 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1028 11:46:46.840895  169037 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1028 11:46:46.840898  169037 command_runner.go:130] > #
	I1028 11:46:46.840906  169037 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1028 11:46:46.840912  169037 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1028 11:46:46.840917  169037 command_runner.go:130] > #
	I1028 11:46:46.840923  169037 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1028 11:46:46.840931  169037 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1028 11:46:46.840934  169037 command_runner.go:130] > # limitation.
	I1028 11:46:46.840940  169037 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1028 11:46:46.840945  169037 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1028 11:46:46.840949  169037 command_runner.go:130] > runtime_type = "oci"
	I1028 11:46:46.840953  169037 command_runner.go:130] > runtime_root = "/run/runc"
	I1028 11:46:46.840959  169037 command_runner.go:130] > runtime_config_path = ""
	I1028 11:46:46.840964  169037 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1028 11:46:46.840970  169037 command_runner.go:130] > monitor_cgroup = "pod"
	I1028 11:46:46.840974  169037 command_runner.go:130] > monitor_exec_cgroup = ""
	I1028 11:46:46.840978  169037 command_runner.go:130] > monitor_env = [
	I1028 11:46:46.840983  169037 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1028 11:46:46.840987  169037 command_runner.go:130] > ]
	I1028 11:46:46.840991  169037 command_runner.go:130] > privileged_without_host_devices = false
	I1028 11:46:46.841000  169037 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1028 11:46:46.841005  169037 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1028 11:46:46.841013  169037 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1028 11:46:46.841027  169037 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1028 11:46:46.841037  169037 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1028 11:46:46.841042  169037 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1028 11:46:46.841053  169037 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1028 11:46:46.841062  169037 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1028 11:46:46.841068  169037 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1028 11:46:46.841077  169037 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1028 11:46:46.841083  169037 command_runner.go:130] > # Example:
	I1028 11:46:46.841088  169037 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1028 11:46:46.841095  169037 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1028 11:46:46.841101  169037 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1028 11:46:46.841108  169037 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1028 11:46:46.841112  169037 command_runner.go:130] > # cpuset = 0
	I1028 11:46:46.841118  169037 command_runner.go:130] > # cpushares = "0-1"
	I1028 11:46:46.841121  169037 command_runner.go:130] > # Where:
	I1028 11:46:46.841127  169037 command_runner.go:130] > # The workload name is workload-type.
	I1028 11:46:46.841137  169037 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1028 11:46:46.841145  169037 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1028 11:46:46.841150  169037 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1028 11:46:46.841160  169037 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1028 11:46:46.841168  169037 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1028 11:46:46.841175  169037 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1028 11:46:46.841182  169037 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1028 11:46:46.841188  169037 command_runner.go:130] > # Default value is set to true
	I1028 11:46:46.841193  169037 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1028 11:46:46.841200  169037 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1028 11:46:46.841205  169037 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1028 11:46:46.841212  169037 command_runner.go:130] > # Default value is set to 'false'
	I1028 11:46:46.841216  169037 command_runner.go:130] > # disable_hostport_mapping = false
	I1028 11:46:46.841226  169037 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1028 11:46:46.841229  169037 command_runner.go:130] > #
	I1028 11:46:46.841235  169037 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1028 11:46:46.841241  169037 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1028 11:46:46.841247  169037 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1028 11:46:46.841253  169037 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1028 11:46:46.841261  169037 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1028 11:46:46.841264  169037 command_runner.go:130] > [crio.image]
	I1028 11:46:46.841270  169037 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1028 11:46:46.841273  169037 command_runner.go:130] > # default_transport = "docker://"
	I1028 11:46:46.841279  169037 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1028 11:46:46.841284  169037 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1028 11:46:46.841288  169037 command_runner.go:130] > # global_auth_file = ""
	I1028 11:46:46.841293  169037 command_runner.go:130] > # The image used to instantiate infra containers.
	I1028 11:46:46.841297  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.841302  169037 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1028 11:46:46.841308  169037 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1028 11:46:46.841313  169037 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1028 11:46:46.841317  169037 command_runner.go:130] > # This option supports live configuration reload.
	I1028 11:46:46.841322  169037 command_runner.go:130] > # pause_image_auth_file = ""
	I1028 11:46:46.841327  169037 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1028 11:46:46.841333  169037 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1028 11:46:46.841339  169037 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1028 11:46:46.841344  169037 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1028 11:46:46.841348  169037 command_runner.go:130] > # pause_command = "/pause"
	I1028 11:46:46.841354  169037 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1028 11:46:46.841360  169037 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1028 11:46:46.841365  169037 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1028 11:46:46.841372  169037 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1028 11:46:46.841378  169037 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1028 11:46:46.841383  169037 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1028 11:46:46.841386  169037 command_runner.go:130] > # pinned_images = [
	I1028 11:46:46.841390  169037 command_runner.go:130] > # ]
	I1028 11:46:46.841396  169037 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1028 11:46:46.841406  169037 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1028 11:46:46.841414  169037 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1028 11:46:46.841422  169037 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1028 11:46:46.841428  169037 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1028 11:46:46.841435  169037 command_runner.go:130] > # signature_policy = ""
	I1028 11:46:46.841440  169037 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1028 11:46:46.841449  169037 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1028 11:46:46.841457  169037 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1028 11:46:46.841468  169037 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1028 11:46:46.841474  169037 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1028 11:46:46.841481  169037 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1028 11:46:46.841488  169037 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1028 11:46:46.841496  169037 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1028 11:46:46.841502  169037 command_runner.go:130] > # changing them here.
	I1028 11:46:46.841506  169037 command_runner.go:130] > # insecure_registries = [
	I1028 11:46:46.841537  169037 command_runner.go:130] > # ]
	I1028 11:46:46.841543  169037 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1028 11:46:46.841548  169037 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1028 11:46:46.841554  169037 command_runner.go:130] > # image_volumes = "mkdir"
	I1028 11:46:46.841559  169037 command_runner.go:130] > # Temporary directory to use for storing big files
	I1028 11:46:46.841565  169037 command_runner.go:130] > # big_files_temporary_dir = ""
	I1028 11:46:46.841571  169037 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1028 11:46:46.841577  169037 command_runner.go:130] > # CNI plugins.
	I1028 11:46:46.841581  169037 command_runner.go:130] > [crio.network]
	I1028 11:46:46.841589  169037 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1028 11:46:46.841596  169037 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1028 11:46:46.841600  169037 command_runner.go:130] > # cni_default_network = ""
	I1028 11:46:46.841606  169037 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1028 11:46:46.841613  169037 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1028 11:46:46.841618  169037 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1028 11:46:46.841624  169037 command_runner.go:130] > # plugin_dirs = [
	I1028 11:46:46.841628  169037 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1028 11:46:46.841632  169037 command_runner.go:130] > # ]
	I1028 11:46:46.841640  169037 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1028 11:46:46.841644  169037 command_runner.go:130] > [crio.metrics]
	I1028 11:46:46.841651  169037 command_runner.go:130] > # Globally enable or disable metrics support.
	I1028 11:46:46.841656  169037 command_runner.go:130] > enable_metrics = true
	I1028 11:46:46.841662  169037 command_runner.go:130] > # Specify enabled metrics collectors.
	I1028 11:46:46.841667  169037 command_runner.go:130] > # Per default all metrics are enabled.
	I1028 11:46:46.841675  169037 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1028 11:46:46.841684  169037 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1028 11:46:46.841692  169037 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1028 11:46:46.841698  169037 command_runner.go:130] > # metrics_collectors = [
	I1028 11:46:46.841702  169037 command_runner.go:130] > # 	"operations",
	I1028 11:46:46.841712  169037 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1028 11:46:46.841716  169037 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1028 11:46:46.841721  169037 command_runner.go:130] > # 	"operations_errors",
	I1028 11:46:46.841725  169037 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1028 11:46:46.841730  169037 command_runner.go:130] > # 	"image_pulls_by_name",
	I1028 11:46:46.841734  169037 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1028 11:46:46.841743  169037 command_runner.go:130] > # 	"image_pulls_failures",
	I1028 11:46:46.841749  169037 command_runner.go:130] > # 	"image_pulls_successes",
	I1028 11:46:46.841754  169037 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1028 11:46:46.841760  169037 command_runner.go:130] > # 	"image_layer_reuse",
	I1028 11:46:46.841764  169037 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1028 11:46:46.841771  169037 command_runner.go:130] > # 	"containers_oom_total",
	I1028 11:46:46.841775  169037 command_runner.go:130] > # 	"containers_oom",
	I1028 11:46:46.841781  169037 command_runner.go:130] > # 	"processes_defunct",
	I1028 11:46:46.841785  169037 command_runner.go:130] > # 	"operations_total",
	I1028 11:46:46.841792  169037 command_runner.go:130] > # 	"operations_latency_seconds",
	I1028 11:46:46.841796  169037 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1028 11:46:46.841800  169037 command_runner.go:130] > # 	"operations_errors_total",
	I1028 11:46:46.841805  169037 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1028 11:46:46.841809  169037 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1028 11:46:46.841816  169037 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1028 11:46:46.841821  169037 command_runner.go:130] > # 	"image_pulls_success_total",
	I1028 11:46:46.841827  169037 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1028 11:46:46.841832  169037 command_runner.go:130] > # 	"containers_oom_count_total",
	I1028 11:46:46.841839  169037 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1028 11:46:46.841844  169037 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1028 11:46:46.841849  169037 command_runner.go:130] > # ]
	I1028 11:46:46.841854  169037 command_runner.go:130] > # The port on which the metrics server will listen.
	I1028 11:46:46.841860  169037 command_runner.go:130] > # metrics_port = 9090
	I1028 11:46:46.841865  169037 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1028 11:46:46.841871  169037 command_runner.go:130] > # metrics_socket = ""
	I1028 11:46:46.841877  169037 command_runner.go:130] > # The certificate for the secure metrics server.
	I1028 11:46:46.841885  169037 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1028 11:46:46.841895  169037 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1028 11:46:46.841902  169037 command_runner.go:130] > # certificate on any modification event.
	I1028 11:46:46.841906  169037 command_runner.go:130] > # metrics_cert = ""
	I1028 11:46:46.841911  169037 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1028 11:46:46.841917  169037 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1028 11:46:46.841923  169037 command_runner.go:130] > # metrics_key = ""
	I1028 11:46:46.841929  169037 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1028 11:46:46.841935  169037 command_runner.go:130] > [crio.tracing]
	I1028 11:46:46.841940  169037 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1028 11:46:46.841944  169037 command_runner.go:130] > # enable_tracing = false
	I1028 11:46:46.841949  169037 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1028 11:46:46.841956  169037 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1028 11:46:46.841963  169037 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1028 11:46:46.841970  169037 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1028 11:46:46.841974  169037 command_runner.go:130] > # CRI-O NRI configuration.
	I1028 11:46:46.841977  169037 command_runner.go:130] > [crio.nri]
	I1028 11:46:46.841982  169037 command_runner.go:130] > # Globally enable or disable NRI.
	I1028 11:46:46.841986  169037 command_runner.go:130] > # enable_nri = false
	I1028 11:46:46.841992  169037 command_runner.go:130] > # NRI socket to listen on.
	I1028 11:46:46.841999  169037 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1028 11:46:46.842003  169037 command_runner.go:130] > # NRI plugin directory to use.
	I1028 11:46:46.842008  169037 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1028 11:46:46.842015  169037 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1028 11:46:46.842019  169037 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1028 11:46:46.842025  169037 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1028 11:46:46.842031  169037 command_runner.go:130] > # nri_disable_connections = false
	I1028 11:46:46.842036  169037 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1028 11:46:46.842042  169037 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1028 11:46:46.842047  169037 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1028 11:46:46.842059  169037 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1028 11:46:46.842064  169037 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1028 11:46:46.842070  169037 command_runner.go:130] > [crio.stats]
	I1028 11:46:46.842076  169037 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1028 11:46:46.842083  169037 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1028 11:46:46.842088  169037 command_runner.go:130] > # stats_collection_period = 0
	I1028 11:46:46.842187  169037 cni.go:84] Creating CNI manager for ""
	I1028 11:46:46.842196  169037 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1028 11:46:46.842207  169037 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:46:46.842233  169037 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.184 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-450140 NodeName:multinode-450140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:46:46.842367  169037 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-450140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.184"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.184"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:46:46.842431  169037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 11:46:46.853895  169037 command_runner.go:130] > kubeadm
	I1028 11:46:46.853916  169037 command_runner.go:130] > kubectl
	I1028 11:46:46.853920  169037 command_runner.go:130] > kubelet
	I1028 11:46:46.853941  169037 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:46:46.853989  169037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 11:46:46.864783  169037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1028 11:46:46.882757  169037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:46:46.900945  169037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1028 11:46:46.920394  169037 ssh_runner.go:195] Run: grep 192.168.39.184	control-plane.minikube.internal$ /etc/hosts
	I1028 11:46:46.924986  169037 command_runner.go:130] > 192.168.39.184	control-plane.minikube.internal
	I1028 11:46:46.925119  169037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:46:47.064741  169037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:46:47.080823  169037 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140 for IP: 192.168.39.184
	I1028 11:46:47.080853  169037 certs.go:194] generating shared ca certs ...
	I1028 11:46:47.080874  169037 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:46:47.081057  169037 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:46:47.081118  169037 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:46:47.081132  169037 certs.go:256] generating profile certs ...
	I1028 11:46:47.081239  169037 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/client.key
	I1028 11:46:47.081335  169037 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.key.cd51ceb4
	I1028 11:46:47.081376  169037 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.key
	I1028 11:46:47.081391  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1028 11:46:47.081404  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1028 11:46:47.081417  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1028 11:46:47.081432  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1028 11:46:47.081443  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1028 11:46:47.081455  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1028 11:46:47.081466  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1028 11:46:47.081477  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1028 11:46:47.081559  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:46:47.081604  169037 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:46:47.081617  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:46:47.081655  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:46:47.081686  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:46:47.081715  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:46:47.081756  169037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:46:47.081785  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.081799  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.081815  169037 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem -> /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.082441  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:46:47.108561  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:46:47.135124  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:46:47.160853  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:46:47.186431  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 11:46:47.212544  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:46:47.239675  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:46:47.264787  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/multinode-450140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 11:46:47.289804  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:46:47.315036  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:46:47.340038  169037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:46:47.364359  169037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:46:47.382352  169037 ssh_runner.go:195] Run: openssl version
	I1028 11:46:47.388634  169037 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1028 11:46:47.388710  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:46:47.400445  169037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.405458  169037 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.405781  169037 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.405833  169037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:46:47.411890  169037 command_runner.go:130] > 3ec20f2e
	I1028 11:46:47.412051  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:46:47.422335  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:46:47.434480  169037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.439363  169037 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.439395  169037 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.439444  169037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:46:47.445443  169037 command_runner.go:130] > b5213941
	I1028 11:46:47.445619  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:46:47.455825  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:46:47.467465  169037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.472447  169037 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.472494  169037 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.472537  169037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:46:47.478647  169037 command_runner.go:130] > 51391683
	I1028 11:46:47.478804  169037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:46:47.488842  169037 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:46:47.493678  169037 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:46:47.493717  169037 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1028 11:46:47.493726  169037 command_runner.go:130] > Device: 253,1	Inode: 7338542     Links: 1
	I1028 11:46:47.493736  169037 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1028 11:46:47.493748  169037 command_runner.go:130] > Access: 2024-10-28 11:39:58.796315840 +0000
	I1028 11:46:47.493757  169037 command_runner.go:130] > Modify: 2024-10-28 11:39:58.796315840 +0000
	I1028 11:46:47.493767  169037 command_runner.go:130] > Change: 2024-10-28 11:39:58.796315840 +0000
	I1028 11:46:47.493777  169037 command_runner.go:130] >  Birth: 2024-10-28 11:39:58.796315840 +0000
	I1028 11:46:47.493832  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 11:46:47.499683  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.499811  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 11:46:47.505794  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.505858  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 11:46:47.512033  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.512099  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 11:46:47.517851  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.517999  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 11:46:47.523652  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.523813  169037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 11:46:47.530371  169037 command_runner.go:130] > Certificate will not expire
	I1028 11:46:47.530452  169037 kubeadm.go:392] StartCluster: {Name:multinode-450140 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-450140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:46:47.530563  169037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:46:47.530600  169037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:46:47.567757  169037 command_runner.go:130] > 9984aacc72a9ff0993332fc0aad6ce2c6f5615875c7c3792438749e989d62b02
	I1028 11:46:47.567781  169037 command_runner.go:130] > 17a75cb502bdea553a2767bbe692e8c3ea1adad72f5c831f79a3e46e8d1abb6a
	I1028 11:46:47.567788  169037 command_runner.go:130] > 47a5d72b6c318b157aa2347b37035c31d864480ae18f027cab73c5ad66b69df2
	I1028 11:46:47.567795  169037 command_runner.go:130] > df86ae076d7bd3d46e4426e3b61b4c3581afabc1cea0cf28388e7963c454b7f5
	I1028 11:46:47.567801  169037 command_runner.go:130] > 09898f5c3ea283707dff548f9f360641786b8042a2a30675090bb9d1f05f5742
	I1028 11:46:47.567806  169037 command_runner.go:130] > caf0607fae8fc41f7e25dc9d1aca76ed1f31891d71edf83d4357e4c4a17affd3
	I1028 11:46:47.567811  169037 command_runner.go:130] > 1aefc2add33bd17169acd9dc5d93f640dca78b9793c9c293a4ca02b16a433764
	I1028 11:46:47.567832  169037 command_runner.go:130] > 2163c6c718431b9cd8d8eb3c8370f2383b3ada331a0a8cbdff600c64220e975b
	I1028 11:46:47.569258  169037 cri.go:89] found id: "9984aacc72a9ff0993332fc0aad6ce2c6f5615875c7c3792438749e989d62b02"
	I1028 11:46:47.569274  169037 cri.go:89] found id: "17a75cb502bdea553a2767bbe692e8c3ea1adad72f5c831f79a3e46e8d1abb6a"
	I1028 11:46:47.569280  169037 cri.go:89] found id: "47a5d72b6c318b157aa2347b37035c31d864480ae18f027cab73c5ad66b69df2"
	I1028 11:46:47.569300  169037 cri.go:89] found id: "df86ae076d7bd3d46e4426e3b61b4c3581afabc1cea0cf28388e7963c454b7f5"
	I1028 11:46:47.569314  169037 cri.go:89] found id: "09898f5c3ea283707dff548f9f360641786b8042a2a30675090bb9d1f05f5742"
	I1028 11:46:47.569317  169037 cri.go:89] found id: "caf0607fae8fc41f7e25dc9d1aca76ed1f31891d71edf83d4357e4c4a17affd3"
	I1028 11:46:47.569319  169037 cri.go:89] found id: "1aefc2add33bd17169acd9dc5d93f640dca78b9793c9c293a4ca02b16a433764"
	I1028 11:46:47.569322  169037 cri.go:89] found id: "2163c6c718431b9cd8d8eb3c8370f2383b3ada331a0a8cbdff600c64220e975b"
	I1028 11:46:47.569325  169037 cri.go:89] found id: ""
	I1028 11:46:47.569363  169037 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-450140 -n multinode-450140
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-450140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.44s)

                                                
                                    
x
+
TestPreload (181.91s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-083517 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1028 11:55:09.886845  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-083517 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.698103726s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-083517 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-083517 image pull gcr.io/k8s-minikube/busybox: (3.543025028s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-083517
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-083517: (7.304820045s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-083517 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1028 11:57:22.068231  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:57:38.998234  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-083517 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m11.283839904s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-083517 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-10-28 11:57:58.282381439 +0000 UTC m=+3800.950605815
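The image list above contains only images restored from the v1.24.4 preload tarball (the registry.k8s.io / k8s.gcr.io control-plane images, storage-provisioner, and kindnetd); gcr.io/k8s-minikube/busybox, pulled before the stop, is absent, which suggests the second start repopulated the container runtime's image store from the freshly downloaded preload rather than preserving locally pulled images. A minimal manual reproduction, reusing the commands the test runs above (the trailing grep is only an illustrative check and is not part of the test), would be:

	out/minikube-linux-amd64 start -p test-preload-083517 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-083517 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-083517
	out/minikube-linux-amd64 start -p test-preload-083517 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-083517 image list | grep busybox    # expected to list gcr.io/k8s-minikube/busybox; empty in this run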
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-083517 -n test-preload-083517
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-083517 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-083517 logs -n 25: (1.104455232s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140 sudo cat                                       | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m03_multinode-450140.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt                       | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m02:/home/docker/cp-test_multinode-450140-m03_multinode-450140-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n                                                                 | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | multinode-450140-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-450140 ssh -n multinode-450140-m02 sudo cat                                   | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-450140-m03_multinode-450140-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-450140 node stop m03                                                          | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:42 UTC |
	| node    | multinode-450140 node start                                                             | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:42 UTC | 28 Oct 24 11:43 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-450140                                                                | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:43 UTC |                     |
	| stop    | -p multinode-450140                                                                     | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:43 UTC |                     |
	| start   | -p multinode-450140                                                                     | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:45 UTC | 28 Oct 24 11:48 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-450140                                                                | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:48 UTC |                     |
	| node    | multinode-450140 node delete                                                            | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:48 UTC | 28 Oct 24 11:48 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-450140 stop                                                                   | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:48 UTC |                     |
	| start   | -p multinode-450140                                                                     | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:51 UTC | 28 Oct 24 11:54 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-450140                                                                | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:54 UTC |                     |
	| start   | -p multinode-450140-m02                                                                 | multinode-450140-m02 | jenkins | v1.34.0 | 28 Oct 24 11:54 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-450140-m03                                                                 | multinode-450140-m03 | jenkins | v1.34.0 | 28 Oct 24 11:54 UTC | 28 Oct 24 11:54 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-450140                                                                 | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:54 UTC |                     |
	| delete  | -p multinode-450140-m03                                                                 | multinode-450140-m03 | jenkins | v1.34.0 | 28 Oct 24 11:54 UTC | 28 Oct 24 11:54 UTC |
	| delete  | -p multinode-450140                                                                     | multinode-450140     | jenkins | v1.34.0 | 28 Oct 24 11:54 UTC | 28 Oct 24 11:54 UTC |
	| start   | -p test-preload-083517                                                                  | test-preload-083517  | jenkins | v1.34.0 | 28 Oct 24 11:54 UTC | 28 Oct 24 11:56 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-083517 image pull                                                          | test-preload-083517  | jenkins | v1.34.0 | 28 Oct 24 11:56 UTC | 28 Oct 24 11:56 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-083517                                                                  | test-preload-083517  | jenkins | v1.34.0 | 28 Oct 24 11:56 UTC | 28 Oct 24 11:56 UTC |
	| start   | -p test-preload-083517                                                                  | test-preload-083517  | jenkins | v1.34.0 | 28 Oct 24 11:56 UTC | 28 Oct 24 11:57 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-083517 image list                                                          | test-preload-083517  | jenkins | v1.34.0 | 28 Oct 24 11:57 UTC | 28 Oct 24 11:57 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 11:56:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 11:56:46.828950  173440 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:56:46.829055  173440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:56:46.829063  173440 out.go:358] Setting ErrFile to fd 2...
	I1028 11:56:46.829068  173440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:56:46.829235  173440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:56:46.829782  173440 out.go:352] Setting JSON to false
	I1028 11:56:46.830712  173440 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5950,"bootTime":1730110657,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:56:46.830820  173440 start.go:139] virtualization: kvm guest
	I1028 11:56:46.833409  173440 out.go:177] * [test-preload-083517] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:56:46.834931  173440 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:56:46.834925  173440 notify.go:220] Checking for updates...
	I1028 11:56:46.837740  173440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:56:46.839141  173440 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:56:46.840605  173440 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:56:46.842196  173440 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:56:46.843691  173440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:56:46.845659  173440 config.go:182] Loaded profile config "test-preload-083517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1028 11:56:46.846072  173440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:56:46.846145  173440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:56:46.861718  173440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37929
	I1028 11:56:46.862252  173440 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:56:46.862852  173440 main.go:141] libmachine: Using API Version  1
	I1028 11:56:46.862875  173440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:56:46.863292  173440 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:56:46.863505  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:56:46.865354  173440 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 11:56:46.866861  173440 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:56:46.867208  173440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:56:46.867255  173440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:56:46.882003  173440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41015
	I1028 11:56:46.882449  173440 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:56:46.882955  173440 main.go:141] libmachine: Using API Version  1
	I1028 11:56:46.882975  173440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:56:46.883312  173440 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:56:46.883493  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:56:46.918699  173440 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 11:56:46.920719  173440 start.go:297] selected driver: kvm2
	I1028 11:56:46.920736  173440 start.go:901] validating driver "kvm2" against &{Name:test-preload-083517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-083517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:56:46.920847  173440 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:56:46.921598  173440 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:56:46.921669  173440 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 11:56:46.937052  173440 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 11:56:46.937402  173440 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:56:46.937431  173440 cni.go:84] Creating CNI manager for ""
	I1028 11:56:46.937474  173440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:56:46.937516  173440 start.go:340] cluster config:
	{Name:test-preload-083517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-083517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:56:46.937645  173440 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 11:56:46.939800  173440 out.go:177] * Starting "test-preload-083517" primary control-plane node in "test-preload-083517" cluster
	I1028 11:56:46.941262  173440 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1028 11:56:47.071586  173440 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1028 11:56:47.071620  173440 cache.go:56] Caching tarball of preloaded images
	I1028 11:56:47.071784  173440 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1028 11:56:47.073767  173440 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1028 11:56:47.075181  173440 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1028 11:56:47.186227  173440 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1028 11:56:58.730134  173440 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1028 11:56:58.730254  173440 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1028 11:56:59.571218  173440 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1028 11:56:59.571352  173440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/config.json ...
	I1028 11:56:59.571617  173440 start.go:360] acquireMachinesLock for test-preload-083517: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 11:56:59.571691  173440 start.go:364] duration metric: took 51.352µs to acquireMachinesLock for "test-preload-083517"
	I1028 11:56:59.571710  173440 start.go:96] Skipping create...Using existing machine configuration
	I1028 11:56:59.571716  173440 fix.go:54] fixHost starting: 
	I1028 11:56:59.571997  173440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:56:59.572035  173440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:56:59.586963  173440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45381
	I1028 11:56:59.587480  173440 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:56:59.588007  173440 main.go:141] libmachine: Using API Version  1
	I1028 11:56:59.588023  173440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:56:59.588320  173440 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:56:59.588529  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:56:59.588665  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetState
	I1028 11:56:59.590507  173440 fix.go:112] recreateIfNeeded on test-preload-083517: state=Stopped err=<nil>
	I1028 11:56:59.590551  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	W1028 11:56:59.590732  173440 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 11:56:59.592878  173440 out.go:177] * Restarting existing kvm2 VM for "test-preload-083517" ...
	I1028 11:56:59.594438  173440 main.go:141] libmachine: (test-preload-083517) Calling .Start
	I1028 11:56:59.594606  173440 main.go:141] libmachine: (test-preload-083517) Ensuring networks are active...
	I1028 11:56:59.595398  173440 main.go:141] libmachine: (test-preload-083517) Ensuring network default is active
	I1028 11:56:59.595752  173440 main.go:141] libmachine: (test-preload-083517) Ensuring network mk-test-preload-083517 is active
	I1028 11:56:59.596058  173440 main.go:141] libmachine: (test-preload-083517) Getting domain xml...
	I1028 11:56:59.596982  173440 main.go:141] libmachine: (test-preload-083517) Creating domain...
	I1028 11:57:00.801454  173440 main.go:141] libmachine: (test-preload-083517) Waiting to get IP...
	I1028 11:57:00.802411  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:00.802841  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:00.802930  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:00.802830  173524 retry.go:31] will retry after 285.150134ms: waiting for machine to come up
	I1028 11:57:01.089189  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:01.089698  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:01.089732  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:01.089643  173524 retry.go:31] will retry after 254.865343ms: waiting for machine to come up
	I1028 11:57:01.346291  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:01.346687  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:01.346720  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:01.346641  173524 retry.go:31] will retry after 449.210128ms: waiting for machine to come up
	I1028 11:57:01.797265  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:01.797681  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:01.797708  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:01.797646  173524 retry.go:31] will retry after 573.372364ms: waiting for machine to come up
	I1028 11:57:02.372443  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:02.372844  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:02.372868  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:02.372789  173524 retry.go:31] will retry after 620.163874ms: waiting for machine to come up
	I1028 11:57:02.994285  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:02.994837  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:02.994869  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:02.994780  173524 retry.go:31] will retry after 829.041798ms: waiting for machine to come up
	I1028 11:57:03.825930  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:03.826343  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:03.826372  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:03.826284  173524 retry.go:31] will retry after 850.471039ms: waiting for machine to come up
	I1028 11:57:04.678736  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:04.679164  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:04.679222  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:04.679063  173524 retry.go:31] will retry after 1.260391324s: waiting for machine to come up
	I1028 11:57:05.941355  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:05.941815  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:05.941847  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:05.941753  173524 retry.go:31] will retry after 1.401021183s: waiting for machine to come up
	I1028 11:57:07.344750  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:07.345270  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:07.345300  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:07.345211  173524 retry.go:31] will retry after 2.263113688s: waiting for machine to come up
	I1028 11:57:09.609440  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:09.609813  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:09.609845  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:09.609767  173524 retry.go:31] will retry after 2.860399142s: waiting for machine to come up
	I1028 11:57:12.473832  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:12.474314  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:12.474343  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:12.474238  173524 retry.go:31] will retry after 2.506431484s: waiting for machine to come up
	I1028 11:57:14.982267  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:14.982664  173440 main.go:141] libmachine: (test-preload-083517) DBG | unable to find current IP address of domain test-preload-083517 in network mk-test-preload-083517
	I1028 11:57:14.982696  173440 main.go:141] libmachine: (test-preload-083517) DBG | I1028 11:57:14.982628  173524 retry.go:31] will retry after 4.473588011s: waiting for machine to come up
	I1028 11:57:19.461294  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.461911  173440 main.go:141] libmachine: (test-preload-083517) Found IP for machine: 192.168.39.230
	I1028 11:57:19.461937  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has current primary IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.461945  173440 main.go:141] libmachine: (test-preload-083517) Reserving static IP address...
	I1028 11:57:19.462442  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "test-preload-083517", mac: "52:54:00:ee:ff:eb", ip: "192.168.39.230"} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:19.462482  173440 main.go:141] libmachine: (test-preload-083517) DBG | skip adding static IP to network mk-test-preload-083517 - found existing host DHCP lease matching {name: "test-preload-083517", mac: "52:54:00:ee:ff:eb", ip: "192.168.39.230"}
	I1028 11:57:19.462496  173440 main.go:141] libmachine: (test-preload-083517) Reserved static IP address: 192.168.39.230
	I1028 11:57:19.462519  173440 main.go:141] libmachine: (test-preload-083517) Waiting for SSH to be available...
	I1028 11:57:19.462538  173440 main.go:141] libmachine: (test-preload-083517) DBG | Getting to WaitForSSH function...
	I1028 11:57:19.464577  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.464851  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:19.464881  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.465007  173440 main.go:141] libmachine: (test-preload-083517) DBG | Using SSH client type: external
	I1028 11:57:19.465049  173440 main.go:141] libmachine: (test-preload-083517) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/test-preload-083517/id_rsa (-rw-------)
	I1028 11:57:19.465085  173440 main.go:141] libmachine: (test-preload-083517) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/test-preload-083517/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 11:57:19.465100  173440 main.go:141] libmachine: (test-preload-083517) DBG | About to run SSH command:
	I1028 11:57:19.465115  173440 main.go:141] libmachine: (test-preload-083517) DBG | exit 0
	I1028 11:57:19.585715  173440 main.go:141] libmachine: (test-preload-083517) DBG | SSH cmd err, output: <nil>: 
	I1028 11:57:19.586082  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetConfigRaw
	I1028 11:57:19.586700  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetIP
	I1028 11:57:19.588918  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.589294  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:19.589319  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.589641  173440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/config.json ...
	I1028 11:57:19.589861  173440 machine.go:93] provisionDockerMachine start ...
	I1028 11:57:19.589880  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:57:19.590080  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:19.592426  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.592746  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:19.592772  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.592864  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:19.593064  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:19.593203  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:19.593299  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:19.593430  173440 main.go:141] libmachine: Using SSH client type: native
	I1028 11:57:19.593652  173440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 11:57:19.593664  173440 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 11:57:19.693987  173440 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 11:57:19.694027  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetMachineName
	I1028 11:57:19.694284  173440 buildroot.go:166] provisioning hostname "test-preload-083517"
	I1028 11:57:19.694325  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetMachineName
	I1028 11:57:19.694489  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:19.697178  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.697490  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:19.697513  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.697688  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:19.697836  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:19.697960  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:19.698202  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:19.698366  173440 main.go:141] libmachine: Using SSH client type: native
	I1028 11:57:19.698530  173440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 11:57:19.698540  173440 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-083517 && echo "test-preload-083517" | sudo tee /etc/hostname
	I1028 11:57:19.812958  173440 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-083517
	
	I1028 11:57:19.813005  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:19.816327  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.816826  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:19.816860  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.817012  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:19.817195  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:19.817410  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:19.817576  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:19.817753  173440 main.go:141] libmachine: Using SSH client type: native
	I1028 11:57:19.817933  173440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 11:57:19.817951  173440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-083517' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-083517/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-083517' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 11:57:19.927386  173440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 11:57:19.927415  173440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 11:57:19.927446  173440 buildroot.go:174] setting up certificates
	I1028 11:57:19.927457  173440 provision.go:84] configureAuth start
	I1028 11:57:19.927467  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetMachineName
	I1028 11:57:19.927736  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetIP
	I1028 11:57:19.930718  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.931051  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:19.931089  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.931249  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:19.933880  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.934199  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:19.934234  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:19.934442  173440 provision.go:143] copyHostCerts
	I1028 11:57:19.934523  173440 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 11:57:19.934539  173440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 11:57:19.934618  173440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 11:57:19.934737  173440 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 11:57:19.934747  173440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 11:57:19.934784  173440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 11:57:19.934860  173440 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 11:57:19.934871  173440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 11:57:19.934902  173440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 11:57:19.934967  173440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.test-preload-083517 san=[127.0.0.1 192.168.39.230 localhost minikube test-preload-083517]
	I1028 11:57:20.093643  173440 provision.go:177] copyRemoteCerts
	I1028 11:57:20.093698  173440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 11:57:20.093724  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:20.096624  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.096932  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:20.096967  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.097150  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:20.097363  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:20.097521  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:20.097658  173440 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/test-preload-083517/id_rsa Username:docker}
	I1028 11:57:20.180581  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 11:57:20.207074  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 11:57:20.234562  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 11:57:20.260935  173440 provision.go:87] duration metric: took 333.462101ms to configureAuth
	I1028 11:57:20.260973  173440 buildroot.go:189] setting minikube options for container-runtime
	I1028 11:57:20.261166  173440 config.go:182] Loaded profile config "test-preload-083517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1028 11:57:20.261253  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:20.263847  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.264174  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:20.264217  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.264375  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:20.264565  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:20.264694  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:20.264808  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:20.264935  173440 main.go:141] libmachine: Using SSH client type: native
	I1028 11:57:20.265092  173440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 11:57:20.265106  173440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 11:57:20.485306  173440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 11:57:20.485344  173440 machine.go:96] duration metric: took 895.466607ms to provisionDockerMachine
	I1028 11:57:20.485358  173440 start.go:293] postStartSetup for "test-preload-083517" (driver="kvm2")
	I1028 11:57:20.485369  173440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 11:57:20.485385  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:57:20.485739  173440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 11:57:20.485780  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:20.488529  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.488856  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:20.488878  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.489093  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:20.489296  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:20.489490  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:20.489634  173440 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/test-preload-083517/id_rsa Username:docker}
	I1028 11:57:20.573075  173440 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 11:57:20.577845  173440 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 11:57:20.577876  173440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 11:57:20.577952  173440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 11:57:20.578028  173440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 11:57:20.578126  173440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 11:57:20.588728  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:57:20.614922  173440 start.go:296] duration metric: took 129.546498ms for postStartSetup
	I1028 11:57:20.614968  173440 fix.go:56] duration metric: took 21.043251529s for fixHost
	I1028 11:57:20.614989  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:20.617747  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.618095  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:20.618128  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.618314  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:20.618532  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:20.618692  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:20.618844  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:20.619013  173440 main.go:141] libmachine: Using SSH client type: native
	I1028 11:57:20.619216  173440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1028 11:57:20.619229  173440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 11:57:20.718775  173440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116640.694298912
	
	I1028 11:57:20.718804  173440 fix.go:216] guest clock: 1730116640.694298912
	I1028 11:57:20.718814  173440 fix.go:229] Guest: 2024-10-28 11:57:20.694298912 +0000 UTC Remote: 2024-10-28 11:57:20.614971978 +0000 UTC m=+33.823542457 (delta=79.326934ms)
	I1028 11:57:20.718836  173440 fix.go:200] guest clock delta is within tolerance: 79.326934ms
	I1028 11:57:20.718842  173440 start.go:83] releasing machines lock for "test-preload-083517", held for 21.147139392s
	I1028 11:57:20.718866  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:57:20.719163  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetIP
	I1028 11:57:20.721915  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.722256  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:20.722278  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.722421  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:57:20.722896  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:57:20.723076  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:57:20.723225  173440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 11:57:20.723271  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:20.723302  173440 ssh_runner.go:195] Run: cat /version.json
	I1028 11:57:20.723327  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:20.725822  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.726166  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:20.726193  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.726258  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.726383  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:20.726569  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:20.726729  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:20.726750  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:20.726753  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:20.726921  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:20.726930  173440 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/test-preload-083517/id_rsa Username:docker}
	I1028 11:57:20.727083  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:20.727224  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:20.727341  173440 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/test-preload-083517/id_rsa Username:docker}
	I1028 11:57:20.823985  173440 ssh_runner.go:195] Run: systemctl --version
	I1028 11:57:20.830511  173440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 11:57:20.990466  173440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 11:57:20.996905  173440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 11:57:20.996990  173440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 11:57:21.014489  173440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 11:57:21.014520  173440 start.go:495] detecting cgroup driver to use...
	I1028 11:57:21.014598  173440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 11:57:21.033511  173440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 11:57:21.048554  173440 docker.go:217] disabling cri-docker service (if available) ...
	I1028 11:57:21.048626  173440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 11:57:21.063103  173440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 11:57:21.077513  173440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 11:57:21.191741  173440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 11:57:21.364791  173440 docker.go:233] disabling docker service ...
	I1028 11:57:21.364875  173440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 11:57:21.379471  173440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 11:57:21.393378  173440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 11:57:21.516316  173440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 11:57:21.636512  173440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 11:57:21.652172  173440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 11:57:21.672505  173440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1028 11:57:21.672576  173440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:57:21.683458  173440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 11:57:21.683545  173440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:57:21.694436  173440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:57:21.705056  173440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:57:21.716285  173440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 11:57:21.727441  173440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:57:21.738410  173440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:57:21.757311  173440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 11:57:21.768269  173440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 11:57:21.777947  173440 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 11:57:21.778004  173440 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 11:57:21.791059  173440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 11:57:21.801719  173440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:57:21.917668  173440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 11:57:22.008523  173440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 11:57:22.008598  173440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 11:57:22.013693  173440 start.go:563] Will wait 60s for crictl version
	I1028 11:57:22.013744  173440 ssh_runner.go:195] Run: which crictl
	I1028 11:57:22.017487  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 11:57:22.054754  173440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 11:57:22.054863  173440 ssh_runner.go:195] Run: crio --version
	I1028 11:57:22.083516  173440 ssh_runner.go:195] Run: crio --version
	I1028 11:57:22.115846  173440 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1028 11:57:22.117377  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetIP
	I1028 11:57:22.120242  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:22.120580  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:22.120615  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:22.120911  173440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 11:57:22.125467  173440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:57:22.139038  173440 kubeadm.go:883] updating cluster {Name:test-preload-083517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-083517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 11:57:22.139148  173440 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1028 11:57:22.139196  173440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:57:22.179114  173440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1028 11:57:22.179178  173440 ssh_runner.go:195] Run: which lz4
	I1028 11:57:22.183473  173440 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 11:57:22.187668  173440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 11:57:22.187699  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1028 11:57:23.852896  173440 crio.go:462] duration metric: took 1.669465284s to copy over tarball
	I1028 11:57:23.852975  173440 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 11:57:26.308595  173440 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.45558956s)
	I1028 11:57:26.308627  173440 crio.go:469] duration metric: took 2.455700829s to extract the tarball
	I1028 11:57:26.308637  173440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 11:57:26.350735  173440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 11:57:26.397360  173440 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1028 11:57:26.397386  173440 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 11:57:26.397455  173440 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:57:26.397497  173440 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 11:57:26.397534  173440 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1028 11:57:26.397572  173440 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1028 11:57:26.397592  173440 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 11:57:26.397502  173440 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 11:57:26.397540  173440 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 11:57:26.397462  173440 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 11:57:26.398994  173440 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1028 11:57:26.399068  173440 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 11:57:26.399084  173440 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:57:26.399066  173440 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 11:57:26.399148  173440 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 11:57:26.399068  173440 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 11:57:26.399205  173440 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1028 11:57:26.399183  173440 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 11:57:26.557253  173440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1028 11:57:26.562891  173440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 11:57:26.584977  173440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1028 11:57:26.600378  173440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1028 11:57:26.604859  173440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1028 11:57:26.610851  173440 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1028 11:57:26.610896  173440 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1028 11:57:26.610941  173440 ssh_runner.go:195] Run: which crictl
	I1028 11:57:26.635108  173440 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1028 11:57:26.635152  173440 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 11:57:26.635201  173440 ssh_runner.go:195] Run: which crictl
	I1028 11:57:26.642037  173440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1028 11:57:26.701051  173440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1028 11:57:26.704685  173440 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1028 11:57:26.704720  173440 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1028 11:57:26.704761  173440 ssh_runner.go:195] Run: which crictl
	I1028 11:57:26.711188  173440 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1028 11:57:26.711227  173440 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1028 11:57:26.711261  173440 ssh_runner.go:195] Run: which crictl
	I1028 11:57:26.711273  173440 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1028 11:57:26.711311  173440 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1028 11:57:26.711362  173440 ssh_runner.go:195] Run: which crictl
	I1028 11:57:26.711365  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1028 11:57:26.711385  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 11:57:26.766017  173440 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1028 11:57:26.766066  173440 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1028 11:57:26.766122  173440 ssh_runner.go:195] Run: which crictl
	I1028 11:57:26.776789  173440 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1028 11:57:26.776831  173440 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1028 11:57:26.776871  173440 ssh_runner.go:195] Run: which crictl
	I1028 11:57:26.776924  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1028 11:57:26.814140  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 11:57:26.816849  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1028 11:57:26.817050  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1028 11:57:26.817062  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1028 11:57:26.817118  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 11:57:26.817144  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1028 11:57:26.823977  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1028 11:57:26.931659  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1028 11:57:26.968968  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1028 11:57:26.975822  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1028 11:57:26.981485  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 11:57:26.981579  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1028 11:57:26.981631  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1028 11:57:26.988448  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1028 11:57:27.072828  173440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1028 11:57:27.072945  173440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1028 11:57:27.123344  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1028 11:57:27.146700  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1028 11:57:27.146720  173440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1028 11:57:27.146760  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1028 11:57:27.146818  173440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1028 11:57:27.157452  173440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1028 11:57:27.157510  173440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1028 11:57:27.157539  173440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1028 11:57:27.157553  173440 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1028 11:57:27.157599  173440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1028 11:57:27.157606  173440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1028 11:57:27.224071  173440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1028 11:57:27.224212  173440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1028 11:57:27.258669  173440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1028 11:57:27.258729  173440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1028 11:57:27.258771  173440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1028 11:57:27.258793  173440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1028 11:57:27.258828  173440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1028 11:57:27.272767  173440 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1028 11:57:27.272864  173440 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1028 11:57:27.547459  173440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:57:29.815491  173440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.657861408s)
	I1028 11:57:29.815533  173440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.657909847s)
	I1028 11:57:29.815560  173440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1028 11:57:29.815540  173440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1028 11:57:29.815590  173440 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1028 11:57:29.815635  173440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.556793475s)
	I1028 11:57:29.815659  173440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1028 11:57:29.815638  173440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1028 11:57:29.815663  173440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.556854821s)
	I1028 11:57:29.815680  173440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1028 11:57:29.815713  173440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.542835781s)
	I1028 11:57:29.815723  173440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1028 11:57:29.815591  173440 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.59136038s)
	I1028 11:57:29.815740  173440 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1028 11:57:29.815802  173440 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.268309117s)
	I1028 11:57:30.558768  173440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1028 11:57:30.558824  173440 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1028 11:57:30.558882  173440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1028 11:57:30.706595  173440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1028 11:57:30.706647  173440 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1028 11:57:30.706713  173440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1028 11:57:31.155664  173440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1028 11:57:31.155730  173440 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1028 11:57:31.155790  173440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1028 11:57:31.600196  173440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1028 11:57:31.600271  173440 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1028 11:57:31.600365  173440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1028 11:57:33.653692  173440 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.053293499s)
	I1028 11:57:33.653729  173440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1028 11:57:33.653763  173440 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1028 11:57:33.653821  173440 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1028 11:57:34.516995  173440 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1028 11:57:34.517050  173440 cache_images.go:123] Successfully loaded all cached images
	I1028 11:57:34.517058  173440 cache_images.go:92] duration metric: took 8.119658781s to LoadCachedImages
	I1028 11:57:34.517075  173440 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.24.4 crio true true} ...
	I1028 11:57:34.517169  173440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-083517 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-083517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 11:57:34.517231  173440 ssh_runner.go:195] Run: crio config
	I1028 11:57:34.570866  173440 cni.go:84] Creating CNI manager for ""
	I1028 11:57:34.570895  173440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:57:34.570908  173440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 11:57:34.570932  173440 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-083517 NodeName:test-preload-083517 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 11:57:34.571107  173440 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-083517"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 11:57:34.571185  173440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1028 11:57:34.582129  173440 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 11:57:34.582265  173440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 11:57:34.592566  173440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1028 11:57:34.610213  173440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 11:57:34.627750  173440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1028 11:57:34.646232  173440 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I1028 11:57:34.650213  173440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 11:57:34.663408  173440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:57:34.792039  173440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:57:34.810469  173440 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517 for IP: 192.168.39.230
	I1028 11:57:34.810498  173440 certs.go:194] generating shared ca certs ...
	I1028 11:57:34.810538  173440 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:57:34.810722  173440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 11:57:34.810779  173440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 11:57:34.810794  173440 certs.go:256] generating profile certs ...
	I1028 11:57:34.810910  173440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/client.key
	I1028 11:57:34.810998  173440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/apiserver.key.e7923fb1
	I1028 11:57:34.811054  173440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/proxy-client.key
	I1028 11:57:34.811196  173440 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 11:57:34.811245  173440 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 11:57:34.811261  173440 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 11:57:34.811318  173440 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 11:57:34.811359  173440 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 11:57:34.811389  173440 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 11:57:34.811450  173440 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 11:57:34.812347  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 11:57:34.855703  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 11:57:34.891813  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 11:57:34.922148  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 11:57:34.965600  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 11:57:35.010178  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 11:57:35.049297  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 11:57:35.074484  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 11:57:35.099808  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 11:57:35.124592  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 11:57:35.149350  173440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 11:57:35.174095  173440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 11:57:35.191787  173440 ssh_runner.go:195] Run: openssl version
	I1028 11:57:35.197764  173440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 11:57:35.209324  173440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 11:57:35.214019  173440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 11:57:35.214089  173440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 11:57:35.220001  173440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 11:57:35.231298  173440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 11:57:35.242412  173440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:57:35.247067  173440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:57:35.247139  173440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 11:57:35.252897  173440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 11:57:35.264247  173440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 11:57:35.275719  173440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 11:57:35.280414  173440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 11:57:35.280500  173440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 11:57:35.286598  173440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 11:57:35.298667  173440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 11:57:35.303672  173440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 11:57:35.309927  173440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 11:57:35.315992  173440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 11:57:35.322229  173440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 11:57:35.328549  173440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 11:57:35.334940  173440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 11:57:35.341217  173440 kubeadm.go:392] StartCluster: {Name:test-preload-083517 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-083517 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:57:35.341313  173440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 11:57:35.341387  173440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:57:35.383606  173440 cri.go:89] found id: ""
	I1028 11:57:35.383690  173440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 11:57:35.394784  173440 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 11:57:35.394810  173440 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 11:57:35.394900  173440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 11:57:35.405368  173440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 11:57:35.405831  173440 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-083517" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:57:35.405968  173440 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-132631/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-083517" cluster setting kubeconfig missing "test-preload-083517" context setting]
	I1028 11:57:35.406262  173440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:57:35.406886  173440 kapi.go:59] client config for test-preload-083517: &rest.Config{Host:"https://192.168.39.230:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:57:35.407542  173440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 11:57:35.418039  173440 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I1028 11:57:35.418073  173440 kubeadm.go:1160] stopping kube-system containers ...
	I1028 11:57:35.418088  173440 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 11:57:35.418140  173440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 11:57:35.461154  173440 cri.go:89] found id: ""
	I1028 11:57:35.461228  173440 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 11:57:35.478665  173440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 11:57:35.489130  173440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 11:57:35.489160  173440 kubeadm.go:157] found existing configuration files:
	
	I1028 11:57:35.489226  173440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 11:57:35.499359  173440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 11:57:35.499423  173440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 11:57:35.509939  173440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 11:57:35.519895  173440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 11:57:35.519968  173440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 11:57:35.530488  173440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 11:57:35.540468  173440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 11:57:35.540526  173440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 11:57:35.550902  173440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 11:57:35.561350  173440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 11:57:35.561406  173440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 11:57:35.571818  173440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 11:57:35.582216  173440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:57:35.690650  173440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:57:36.684945  173440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:57:36.979427  173440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:57:37.045944  173440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:57:37.139418  173440 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:57:37.139517  173440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:57:37.640363  173440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:57:38.139855  173440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:57:38.187853  173440 api_server.go:72] duration metric: took 1.048433052s to wait for apiserver process to appear ...
	I1028 11:57:38.187880  173440 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:57:38.187915  173440 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 11:57:38.188429  173440 api_server.go:269] stopped: https://192.168.39.230:8443/healthz: Get "https://192.168.39.230:8443/healthz": dial tcp 192.168.39.230:8443: connect: connection refused
	I1028 11:57:38.687962  173440 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 11:57:38.688516  173440 api_server.go:269] stopped: https://192.168.39.230:8443/healthz: Get "https://192.168.39.230:8443/healthz": dial tcp 192.168.39.230:8443: connect: connection refused
	I1028 11:57:39.188076  173440 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 11:57:42.715589  173440 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 11:57:42.715619  173440 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 11:57:42.715633  173440 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 11:57:42.760044  173440 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 11:57:42.760083  173440 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 11:57:43.188679  173440 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 11:57:43.200062  173440 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 11:57:43.200091  173440 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 11:57:43.688732  173440 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 11:57:43.694504  173440 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 11:57:43.694554  173440 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 11:57:44.188067  173440 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 11:57:44.194166  173440 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1028 11:57:44.201257  173440 api_server.go:141] control plane version: v1.24.4
	I1028 11:57:44.201295  173440 api_server.go:131] duration metric: took 6.013405711s to wait for apiserver health ...
	I1028 11:57:44.201305  173440 cni.go:84] Creating CNI manager for ""
	I1028 11:57:44.201320  173440 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 11:57:44.203262  173440 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 11:57:44.204869  173440 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 11:57:44.217445  173440 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 11:57:44.256993  173440 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:57:44.257106  173440 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 11:57:44.257123  173440 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 11:57:44.268741  173440 system_pods.go:59] 7 kube-system pods found
	I1028 11:57:44.268776  173440 system_pods.go:61] "coredns-6d4b75cb6d-skkbc" [70dd8180-76ad-4162-a4e0-0dda4601739a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 11:57:44.268783  173440 system_pods.go:61] "etcd-test-preload-083517" [4fadc798-cf6a-4823-84f7-6e6e059d1c8d] Running
	I1028 11:57:44.268789  173440 system_pods.go:61] "kube-apiserver-test-preload-083517" [6fc92a78-8848-4e41-87a5-6b5bcaad5f53] Running
	I1028 11:57:44.268793  173440 system_pods.go:61] "kube-controller-manager-test-preload-083517" [ac86c653-30f1-47a4-8b98-40d65060ec52] Running
	I1028 11:57:44.268798  173440 system_pods.go:61] "kube-proxy-f8qvv" [abf41645-f094-4078-a00b-100a55ed83d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 11:57:44.268806  173440 system_pods.go:61] "kube-scheduler-test-preload-083517" [8686939b-04ba-4843-a01f-3393ae59a9d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 11:57:44.268812  173440 system_pods.go:61] "storage-provisioner" [07aab3fa-d57c-42c2-bd28-ddc163dc7be2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 11:57:44.268819  173440 system_pods.go:74] duration metric: took 11.805259ms to wait for pod list to return data ...
	I1028 11:57:44.268827  173440 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:57:44.272431  173440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:57:44.272466  173440 node_conditions.go:123] node cpu capacity is 2
	I1028 11:57:44.272480  173440 node_conditions.go:105] duration metric: took 3.647823ms to run NodePressure ...
	I1028 11:57:44.272502  173440 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 11:57:44.556293  173440 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 11:57:44.564268  173440 kubeadm.go:739] kubelet initialised
	I1028 11:57:44.564299  173440 kubeadm.go:740] duration metric: took 7.974311ms waiting for restarted kubelet to initialise ...
	I1028 11:57:44.564312  173440 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:57:44.570441  173440 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-skkbc" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:44.583592  173440 pod_ready.go:98] node "test-preload-083517" hosting pod "coredns-6d4b75cb6d-skkbc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:44.583626  173440 pod_ready.go:82] duration metric: took 13.156838ms for pod "coredns-6d4b75cb6d-skkbc" in "kube-system" namespace to be "Ready" ...
	E1028 11:57:44.583640  173440 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-083517" hosting pod "coredns-6d4b75cb6d-skkbc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:44.583660  173440 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:44.588799  173440 pod_ready.go:98] node "test-preload-083517" hosting pod "etcd-test-preload-083517" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:44.588830  173440 pod_ready.go:82] duration metric: took 5.154823ms for pod "etcd-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	E1028 11:57:44.588844  173440 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-083517" hosting pod "etcd-test-preload-083517" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:44.588852  173440 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:44.593347  173440 pod_ready.go:98] node "test-preload-083517" hosting pod "kube-apiserver-test-preload-083517" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:44.593377  173440 pod_ready.go:82] duration metric: took 4.512748ms for pod "kube-apiserver-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	E1028 11:57:44.593389  173440 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-083517" hosting pod "kube-apiserver-test-preload-083517" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:44.593397  173440 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:44.661032  173440 pod_ready.go:98] node "test-preload-083517" hosting pod "kube-controller-manager-test-preload-083517" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:44.661062  173440 pod_ready.go:82] duration metric: took 67.652735ms for pod "kube-controller-manager-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	E1028 11:57:44.661075  173440 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-083517" hosting pod "kube-controller-manager-test-preload-083517" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:44.661084  173440 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f8qvv" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:45.060457  173440 pod_ready.go:98] node "test-preload-083517" hosting pod "kube-proxy-f8qvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:45.060487  173440 pod_ready.go:82] duration metric: took 399.392333ms for pod "kube-proxy-f8qvv" in "kube-system" namespace to be "Ready" ...
	E1028 11:57:45.060500  173440 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-083517" hosting pod "kube-proxy-f8qvv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:45.060508  173440 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:45.459634  173440 pod_ready.go:98] node "test-preload-083517" hosting pod "kube-scheduler-test-preload-083517" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:45.459667  173440 pod_ready.go:82] duration metric: took 399.150788ms for pod "kube-scheduler-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	E1028 11:57:45.459680  173440 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-083517" hosting pod "kube-scheduler-test-preload-083517" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:45.459691  173440 pod_ready.go:39] duration metric: took 895.366597ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:57:45.459724  173440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 11:57:45.472954  173440 ops.go:34] apiserver oom_adj: -16
	I1028 11:57:45.472986  173440 kubeadm.go:597] duration metric: took 10.078168658s to restartPrimaryControlPlane
	I1028 11:57:45.472997  173440 kubeadm.go:394] duration metric: took 10.131789767s to StartCluster
	I1028 11:57:45.473015  173440 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:57:45.473088  173440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:57:45.473729  173440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 11:57:45.473957  173440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 11:57:45.474084  173440 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 11:57:45.474183  173440 addons.go:69] Setting storage-provisioner=true in profile "test-preload-083517"
	I1028 11:57:45.474197  173440 addons.go:234] Setting addon storage-provisioner=true in "test-preload-083517"
	I1028 11:57:45.474198  173440 config.go:182] Loaded profile config "test-preload-083517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1028 11:57:45.474213  173440 addons.go:69] Setting default-storageclass=true in profile "test-preload-083517"
	W1028 11:57:45.474203  173440 addons.go:243] addon storage-provisioner should already be in state true
	I1028 11:57:45.474240  173440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-083517"
	I1028 11:57:45.474262  173440 host.go:66] Checking if "test-preload-083517" exists ...
	I1028 11:57:45.474720  173440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:57:45.474756  173440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:57:45.474758  173440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:57:45.474797  173440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:57:45.475797  173440 out.go:177] * Verifying Kubernetes components...
	I1028 11:57:45.477279  173440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 11:57:45.490486  173440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41237
	I1028 11:57:45.490571  173440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I1028 11:57:45.491011  173440 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:57:45.491056  173440 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:57:45.491596  173440 main.go:141] libmachine: Using API Version  1
	I1028 11:57:45.491617  173440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:57:45.491755  173440 main.go:141] libmachine: Using API Version  1
	I1028 11:57:45.491778  173440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:57:45.491954  173440 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:57:45.492157  173440 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:57:45.492303  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetState
	I1028 11:57:45.492492  173440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:57:45.492538  173440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:57:45.494675  173440 kapi.go:59] client config for test-preload-083517: &rest.Config{Host:"https://192.168.39.230:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/client.crt", KeyFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/profiles/test-preload-083517/client.key", CAFile:"/home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1028 11:57:45.494943  173440 addons.go:234] Setting addon default-storageclass=true in "test-preload-083517"
	W1028 11:57:45.494957  173440 addons.go:243] addon default-storageclass should already be in state true
	I1028 11:57:45.494982  173440 host.go:66] Checking if "test-preload-083517" exists ...
	I1028 11:57:45.495278  173440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:57:45.495320  173440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:57:45.507872  173440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I1028 11:57:45.508607  173440 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:57:45.509219  173440 main.go:141] libmachine: Using API Version  1
	I1028 11:57:45.509245  173440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:57:45.509670  173440 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:57:45.509682  173440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I1028 11:57:45.509877  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetState
	I1028 11:57:45.510077  173440 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:57:45.510573  173440 main.go:141] libmachine: Using API Version  1
	I1028 11:57:45.510596  173440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:57:45.510933  173440 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:57:45.511485  173440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:57:45.511526  173440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:57:45.511796  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:57:45.514179  173440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 11:57:45.515718  173440 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:57:45.515739  173440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 11:57:45.515759  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:45.519226  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:45.519726  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:45.519767  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:45.519916  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:45.520118  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:45.520292  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:45.520428  173440 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/test-preload-083517/id_rsa Username:docker}
	I1028 11:57:45.552789  173440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I1028 11:57:45.553275  173440 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:57:45.553811  173440 main.go:141] libmachine: Using API Version  1
	I1028 11:57:45.553849  173440 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:57:45.554242  173440 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:57:45.554470  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetState
	I1028 11:57:45.556381  173440 main.go:141] libmachine: (test-preload-083517) Calling .DriverName
	I1028 11:57:45.556596  173440 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 11:57:45.556611  173440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 11:57:45.556630  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHHostname
	I1028 11:57:45.559635  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:45.560028  173440 main.go:141] libmachine: (test-preload-083517) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:ff:eb", ip: ""} in network mk-test-preload-083517: {Iface:virbr1 ExpiryTime:2024-10-28 12:55:14 +0000 UTC Type:0 Mac:52:54:00:ee:ff:eb Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-083517 Clientid:01:52:54:00:ee:ff:eb}
	I1028 11:57:45.560062  173440 main.go:141] libmachine: (test-preload-083517) DBG | domain test-preload-083517 has defined IP address 192.168.39.230 and MAC address 52:54:00:ee:ff:eb in network mk-test-preload-083517
	I1028 11:57:45.560187  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHPort
	I1028 11:57:45.560398  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHKeyPath
	I1028 11:57:45.560532  173440 main.go:141] libmachine: (test-preload-083517) Calling .GetSSHUsername
	I1028 11:57:45.560670  173440 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/test-preload-083517/id_rsa Username:docker}
	I1028 11:57:45.669645  173440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 11:57:45.685956  173440 node_ready.go:35] waiting up to 6m0s for node "test-preload-083517" to be "Ready" ...
	I1028 11:57:45.813951  173440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 11:57:45.826491  173440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 11:57:46.886731  173440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.060204414s)
	I1028 11:57:46.886792  173440 main.go:141] libmachine: Making call to close driver server
	I1028 11:57:46.886802  173440 main.go:141] libmachine: (test-preload-083517) Calling .Close
	I1028 11:57:46.886825  173440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.072835299s)
	I1028 11:57:46.886861  173440 main.go:141] libmachine: Making call to close driver server
	I1028 11:57:46.886878  173440 main.go:141] libmachine: (test-preload-083517) Calling .Close
	I1028 11:57:46.887080  173440 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:57:46.887100  173440 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:57:46.887109  173440 main.go:141] libmachine: Making call to close driver server
	I1028 11:57:46.887116  173440 main.go:141] libmachine: (test-preload-083517) Calling .Close
	I1028 11:57:46.887234  173440 main.go:141] libmachine: (test-preload-083517) DBG | Closing plugin on server side
	I1028 11:57:46.887237  173440 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:57:46.887263  173440 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:57:46.887271  173440 main.go:141] libmachine: Making call to close driver server
	I1028 11:57:46.887279  173440 main.go:141] libmachine: (test-preload-083517) Calling .Close
	I1028 11:57:46.887351  173440 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:57:46.887363  173440 main.go:141] libmachine: (test-preload-083517) DBG | Closing plugin on server side
	I1028 11:57:46.887367  173440 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:57:46.887499  173440 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:57:46.887509  173440 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:57:46.894176  173440 main.go:141] libmachine: Making call to close driver server
	I1028 11:57:46.894198  173440 main.go:141] libmachine: (test-preload-083517) Calling .Close
	I1028 11:57:46.894441  173440 main.go:141] libmachine: Successfully made call to close driver server
	I1028 11:57:46.894447  173440 main.go:141] libmachine: (test-preload-083517) DBG | Closing plugin on server side
	I1028 11:57:46.894456  173440 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 11:57:46.896199  173440 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1028 11:57:46.897519  173440 addons.go:510] duration metric: took 1.423449363s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1028 11:57:47.690170  173440 node_ready.go:53] node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:50.189924  173440 node_ready.go:53] node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:52.689554  173440 node_ready.go:53] node "test-preload-083517" has status "Ready":"False"
	I1028 11:57:53.189699  173440 node_ready.go:49] node "test-preload-083517" has status "Ready":"True"
	I1028 11:57:53.189722  173440 node_ready.go:38] duration metric: took 7.50373203s for node "test-preload-083517" to be "Ready" ...
	I1028 11:57:53.189731  173440 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:57:53.194360  173440 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-skkbc" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.199186  173440 pod_ready.go:93] pod "coredns-6d4b75cb6d-skkbc" in "kube-system" namespace has status "Ready":"True"
	I1028 11:57:53.199206  173440 pod_ready.go:82] duration metric: took 4.824608ms for pod "coredns-6d4b75cb6d-skkbc" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.199215  173440 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.705500  173440 pod_ready.go:93] pod "etcd-test-preload-083517" in "kube-system" namespace has status "Ready":"True"
	I1028 11:57:53.705538  173440 pod_ready.go:82] duration metric: took 506.30303ms for pod "etcd-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.705548  173440 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.709430  173440 pod_ready.go:93] pod "kube-apiserver-test-preload-083517" in "kube-system" namespace has status "Ready":"True"
	I1028 11:57:53.709446  173440 pod_ready.go:82] duration metric: took 3.89171ms for pod "kube-apiserver-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.709454  173440 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.713370  173440 pod_ready.go:93] pod "kube-controller-manager-test-preload-083517" in "kube-system" namespace has status "Ready":"True"
	I1028 11:57:53.713387  173440 pod_ready.go:82] duration metric: took 3.927898ms for pod "kube-controller-manager-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.713400  173440 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f8qvv" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.990900  173440 pod_ready.go:93] pod "kube-proxy-f8qvv" in "kube-system" namespace has status "Ready":"True"
	I1028 11:57:53.990925  173440 pod_ready.go:82] duration metric: took 277.519907ms for pod "kube-proxy-f8qvv" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:53.990936  173440 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:55.998739  173440 pod_ready.go:103] pod "kube-scheduler-test-preload-083517" in "kube-system" namespace has status "Ready":"False"
	I1028 11:57:57.496908  173440 pod_ready.go:93] pod "kube-scheduler-test-preload-083517" in "kube-system" namespace has status "Ready":"True"
	I1028 11:57:57.496932  173440 pod_ready.go:82] duration metric: took 3.505990659s for pod "kube-scheduler-test-preload-083517" in "kube-system" namespace to be "Ready" ...
	I1028 11:57:57.496944  173440 pod_ready.go:39] duration metric: took 4.307203636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 11:57:57.496964  173440 api_server.go:52] waiting for apiserver process to appear ...
	I1028 11:57:57.497010  173440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:57:57.512206  173440 api_server.go:72] duration metric: took 12.038216215s to wait for apiserver process to appear ...
	I1028 11:57:57.512239  173440 api_server.go:88] waiting for apiserver healthz status ...
	I1028 11:57:57.512261  173440 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1028 11:57:57.517799  173440 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1028 11:57:57.518668  173440 api_server.go:141] control plane version: v1.24.4
	I1028 11:57:57.518690  173440 api_server.go:131] duration metric: took 6.443613ms to wait for apiserver health ...
	I1028 11:57:57.518698  173440 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 11:57:57.522945  173440 system_pods.go:59] 7 kube-system pods found
	I1028 11:57:57.522967  173440 system_pods.go:61] "coredns-6d4b75cb6d-skkbc" [70dd8180-76ad-4162-a4e0-0dda4601739a] Running
	I1028 11:57:57.522971  173440 system_pods.go:61] "etcd-test-preload-083517" [4fadc798-cf6a-4823-84f7-6e6e059d1c8d] Running
	I1028 11:57:57.522975  173440 system_pods.go:61] "kube-apiserver-test-preload-083517" [6fc92a78-8848-4e41-87a5-6b5bcaad5f53] Running
	I1028 11:57:57.522978  173440 system_pods.go:61] "kube-controller-manager-test-preload-083517" [ac86c653-30f1-47a4-8b98-40d65060ec52] Running
	I1028 11:57:57.522981  173440 system_pods.go:61] "kube-proxy-f8qvv" [abf41645-f094-4078-a00b-100a55ed83d8] Running
	I1028 11:57:57.522984  173440 system_pods.go:61] "kube-scheduler-test-preload-083517" [8686939b-04ba-4843-a01f-3393ae59a9d4] Running
	I1028 11:57:57.522989  173440 system_pods.go:61] "storage-provisioner" [07aab3fa-d57c-42c2-bd28-ddc163dc7be2] Running
	I1028 11:57:57.522997  173440 system_pods.go:74] duration metric: took 4.29292ms to wait for pod list to return data ...
	I1028 11:57:57.523009  173440 default_sa.go:34] waiting for default service account to be created ...
	I1028 11:57:57.590332  173440 default_sa.go:45] found service account: "default"
	I1028 11:57:57.590358  173440 default_sa.go:55] duration metric: took 67.342378ms for default service account to be created ...
	I1028 11:57:57.590367  173440 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 11:57:57.793171  173440 system_pods.go:86] 7 kube-system pods found
	I1028 11:57:57.793214  173440 system_pods.go:89] "coredns-6d4b75cb6d-skkbc" [70dd8180-76ad-4162-a4e0-0dda4601739a] Running
	I1028 11:57:57.793220  173440 system_pods.go:89] "etcd-test-preload-083517" [4fadc798-cf6a-4823-84f7-6e6e059d1c8d] Running
	I1028 11:57:57.793225  173440 system_pods.go:89] "kube-apiserver-test-preload-083517" [6fc92a78-8848-4e41-87a5-6b5bcaad5f53] Running
	I1028 11:57:57.793229  173440 system_pods.go:89] "kube-controller-manager-test-preload-083517" [ac86c653-30f1-47a4-8b98-40d65060ec52] Running
	I1028 11:57:57.793232  173440 system_pods.go:89] "kube-proxy-f8qvv" [abf41645-f094-4078-a00b-100a55ed83d8] Running
	I1028 11:57:57.793236  173440 system_pods.go:89] "kube-scheduler-test-preload-083517" [8686939b-04ba-4843-a01f-3393ae59a9d4] Running
	I1028 11:57:57.793239  173440 system_pods.go:89] "storage-provisioner" [07aab3fa-d57c-42c2-bd28-ddc163dc7be2] Running
	I1028 11:57:57.793245  173440 system_pods.go:126] duration metric: took 202.873371ms to wait for k8s-apps to be running ...
	I1028 11:57:57.793253  173440 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 11:57:57.793307  173440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:57:57.809275  173440 system_svc.go:56] duration metric: took 16.009587ms WaitForService to wait for kubelet
	I1028 11:57:57.809317  173440 kubeadm.go:582] duration metric: took 12.335329877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 11:57:57.809341  173440 node_conditions.go:102] verifying NodePressure condition ...
	I1028 11:57:57.990759  173440 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 11:57:57.990783  173440 node_conditions.go:123] node cpu capacity is 2
	I1028 11:57:57.990792  173440 node_conditions.go:105] duration metric: took 181.446387ms to run NodePressure ...
	I1028 11:57:57.990804  173440 start.go:241] waiting for startup goroutines ...
	I1028 11:57:57.990811  173440 start.go:246] waiting for cluster config update ...
	I1028 11:57:57.990822  173440 start.go:255] writing updated cluster config ...
	I1028 11:57:57.991103  173440 ssh_runner.go:195] Run: rm -f paused
	I1028 11:57:58.038005  173440 start.go:600] kubectl: 1.31.2, cluster: 1.24.4 (minor skew: 7)
	I1028 11:57:58.039949  173440 out.go:201] 
	W1028 11:57:58.041227  173440 out.go:270] ! /usr/local/bin/kubectl is version 1.31.2, which may have incompatibilities with Kubernetes 1.24.4.
	I1028 11:57:58.042453  173440 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1028 11:57:58.043681  173440 out.go:177] * Done! kubectl is now configured to use "test-preload-083517" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.941349243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116678941325803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f99cb0c-3788-434e-8914-e84608bd5983 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.942107511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3a2bc8a-85a8-4275-ae4f-e14bceba9634 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.942176313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3a2bc8a-85a8-4275-ae4f-e14bceba9634 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.942360911Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7dfcaf8af91be1b76c16056f65a72296e7ccebcec342716abca978c17bc3037b,PodSandboxId:7e94c24242a56666a687f473c3c7e9083a35234637aa23a156ce0934b7a03b56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730116671197754340,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-skkbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70dd8180-76ad-4162-a4e0-0dda4601739a,},Annotations:map[string]string{io.kubernetes.container.hash: b82ab926,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095704c1378e651bbbd0f2144b8d237f30ae8e7628fc2a37a44f91e211d1bf2f,PodSandboxId:1ef5b142abd5f4006f05cf1783cb475cbeca7e8d0b6f5660a5080c8b9bc725e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730116664155516003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f8qvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: abf41645-f094-4078-a00b-100a55ed83d8,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2fa7be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5b29d3adfc91bd965a0e39a7a416c7e10a3b2ced56d9d132401f7695aeb027,PodSandboxId:e54960d1efd048bcc33655bd1beee30f601c64604764aa542e6e828afea7dd20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116663886916123,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07
aab3fa-d57c-42c2-bd28-ddc163dc7be2,},Annotations:map[string]string{io.kubernetes.container.hash: 62f21e33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608442e75b51e3d918ef007081eeca248899b8ef365f92f51f5a21a94def56f6,PodSandboxId:d8d33b5aba13318227fc0d56e6577e8060a3185a31c96759db784cd6a780d5bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730116657980644040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bbb0fe7aa0538c49988b66aa56446ec,},Anno
tations:map[string]string{io.kubernetes.container.hash: 54b9989d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d91fdd963f2a93d11b82255cf12486fcd71945096b7337f1fb9318afdd7951,PodSandboxId:29c892f055afa0d6652c0a871a2c13fe14b0a84a35cc7b2f5fadba843ac71936,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730116657973653668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1eb687a1211312bb8d4e2
405008cb4,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379d6610b22e0b36425c41110ed9b1fb06b6368335aa6cc55f7bc13a7a451c01,PodSandboxId:2b5aebc34e3e30d13c8ad83c0953da1430b92550c3a93f07dbe1614c7f32ddbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730116657896834655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 612367f03a68d1db1041812e4b589683,}
,Annotations:map[string]string{io.kubernetes.container.hash: e63b93b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7cfa35836803cefed1f2501179a94791c6c3fe2677c4f39916cd5c71ba874ff,PodSandboxId:a1492f438e9cff5b73245902e5d174812c17b10e557693fee90e6af550f2530b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730116657868027953,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd1c7e453bdc5631c75728534a12051,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3a2bc8a-85a8-4275-ae4f-e14bceba9634 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.979952286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcadc9d0-0369-412f-9450-493a2c01c02d name=/runtime.v1.RuntimeService/Version
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.980046435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcadc9d0-0369-412f-9450-493a2c01c02d name=/runtime.v1.RuntimeService/Version
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.982039679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8698728f-b541-4e03-9031-474285b90005 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.982561330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116678982539325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8698728f-b541-4e03-9031-474285b90005 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.983143131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e908333c-a362-469d-941e-ce69f20314d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.983214502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e908333c-a362-469d-941e-ce69f20314d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:58 test-preload-083517 crio[660]: time="2024-10-28 11:57:58.983419508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7dfcaf8af91be1b76c16056f65a72296e7ccebcec342716abca978c17bc3037b,PodSandboxId:7e94c24242a56666a687f473c3c7e9083a35234637aa23a156ce0934b7a03b56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730116671197754340,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-skkbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70dd8180-76ad-4162-a4e0-0dda4601739a,},Annotations:map[string]string{io.kubernetes.container.hash: b82ab926,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095704c1378e651bbbd0f2144b8d237f30ae8e7628fc2a37a44f91e211d1bf2f,PodSandboxId:1ef5b142abd5f4006f05cf1783cb475cbeca7e8d0b6f5660a5080c8b9bc725e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730116664155516003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f8qvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: abf41645-f094-4078-a00b-100a55ed83d8,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2fa7be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5b29d3adfc91bd965a0e39a7a416c7e10a3b2ced56d9d132401f7695aeb027,PodSandboxId:e54960d1efd048bcc33655bd1beee30f601c64604764aa542e6e828afea7dd20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116663886916123,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07
aab3fa-d57c-42c2-bd28-ddc163dc7be2,},Annotations:map[string]string{io.kubernetes.container.hash: 62f21e33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608442e75b51e3d918ef007081eeca248899b8ef365f92f51f5a21a94def56f6,PodSandboxId:d8d33b5aba13318227fc0d56e6577e8060a3185a31c96759db784cd6a780d5bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730116657980644040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bbb0fe7aa0538c49988b66aa56446ec,},Anno
tations:map[string]string{io.kubernetes.container.hash: 54b9989d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d91fdd963f2a93d11b82255cf12486fcd71945096b7337f1fb9318afdd7951,PodSandboxId:29c892f055afa0d6652c0a871a2c13fe14b0a84a35cc7b2f5fadba843ac71936,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730116657973653668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1eb687a1211312bb8d4e2
405008cb4,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379d6610b22e0b36425c41110ed9b1fb06b6368335aa6cc55f7bc13a7a451c01,PodSandboxId:2b5aebc34e3e30d13c8ad83c0953da1430b92550c3a93f07dbe1614c7f32ddbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730116657896834655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 612367f03a68d1db1041812e4b589683,}
,Annotations:map[string]string{io.kubernetes.container.hash: e63b93b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7cfa35836803cefed1f2501179a94791c6c3fe2677c4f39916cd5c71ba874ff,PodSandboxId:a1492f438e9cff5b73245902e5d174812c17b10e557693fee90e6af550f2530b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730116657868027953,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd1c7e453bdc5631c75728534a12051,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e908333c-a362-469d-941e-ce69f20314d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.023318098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97a069ed-317d-43c2-9038-ce129a1f29a0 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.023392954Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97a069ed-317d-43c2-9038-ce129a1f29a0 name=/runtime.v1.RuntimeService/Version
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.024854766Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8dc25fd7-d140-4d32-b4a1-f846c5c5ba89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.025300779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116679025278521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8dc25fd7-d140-4d32-b4a1-f846c5c5ba89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.025829423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c179715-ed04-4eec-ac83-32d44b9f5891 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.025883498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c179715-ed04-4eec-ac83-32d44b9f5891 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.026036704Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7dfcaf8af91be1b76c16056f65a72296e7ccebcec342716abca978c17bc3037b,PodSandboxId:7e94c24242a56666a687f473c3c7e9083a35234637aa23a156ce0934b7a03b56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730116671197754340,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-skkbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70dd8180-76ad-4162-a4e0-0dda4601739a,},Annotations:map[string]string{io.kubernetes.container.hash: b82ab926,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095704c1378e651bbbd0f2144b8d237f30ae8e7628fc2a37a44f91e211d1bf2f,PodSandboxId:1ef5b142abd5f4006f05cf1783cb475cbeca7e8d0b6f5660a5080c8b9bc725e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730116664155516003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f8qvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: abf41645-f094-4078-a00b-100a55ed83d8,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2fa7be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5b29d3adfc91bd965a0e39a7a416c7e10a3b2ced56d9d132401f7695aeb027,PodSandboxId:e54960d1efd048bcc33655bd1beee30f601c64604764aa542e6e828afea7dd20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116663886916123,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07
aab3fa-d57c-42c2-bd28-ddc163dc7be2,},Annotations:map[string]string{io.kubernetes.container.hash: 62f21e33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608442e75b51e3d918ef007081eeca248899b8ef365f92f51f5a21a94def56f6,PodSandboxId:d8d33b5aba13318227fc0d56e6577e8060a3185a31c96759db784cd6a780d5bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730116657980644040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bbb0fe7aa0538c49988b66aa56446ec,},Anno
tations:map[string]string{io.kubernetes.container.hash: 54b9989d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d91fdd963f2a93d11b82255cf12486fcd71945096b7337f1fb9318afdd7951,PodSandboxId:29c892f055afa0d6652c0a871a2c13fe14b0a84a35cc7b2f5fadba843ac71936,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730116657973653668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1eb687a1211312bb8d4e2
405008cb4,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379d6610b22e0b36425c41110ed9b1fb06b6368335aa6cc55f7bc13a7a451c01,PodSandboxId:2b5aebc34e3e30d13c8ad83c0953da1430b92550c3a93f07dbe1614c7f32ddbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730116657896834655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 612367f03a68d1db1041812e4b589683,}
,Annotations:map[string]string{io.kubernetes.container.hash: e63b93b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7cfa35836803cefed1f2501179a94791c6c3fe2677c4f39916cd5c71ba874ff,PodSandboxId:a1492f438e9cff5b73245902e5d174812c17b10e557693fee90e6af550f2530b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730116657868027953,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd1c7e453bdc5631c75728534a12051,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c179715-ed04-4eec-ac83-32d44b9f5891 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.060987572Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26828f1c-5f26-4b71-af77-ebed90c6facd name=/runtime.v1.RuntimeService/Version
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.061071798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26828f1c-5f26-4b71-af77-ebed90c6facd name=/runtime.v1.RuntimeService/Version
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.064642656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1d5e36e-a33c-4a24-9bb1-2900aa543e76 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.065079872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116679065051269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1d5e36e-a33c-4a24-9bb1-2900aa543e76 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.065648816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51f0b5ea-0249-48f2-ae4e-44b9221543f0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.065700782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51f0b5ea-0249-48f2-ae4e-44b9221543f0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 11:57:59 test-preload-083517 crio[660]: time="2024-10-28 11:57:59.065846291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7dfcaf8af91be1b76c16056f65a72296e7ccebcec342716abca978c17bc3037b,PodSandboxId:7e94c24242a56666a687f473c3c7e9083a35234637aa23a156ce0934b7a03b56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1730116671197754340,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-skkbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70dd8180-76ad-4162-a4e0-0dda4601739a,},Annotations:map[string]string{io.kubernetes.container.hash: b82ab926,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095704c1378e651bbbd0f2144b8d237f30ae8e7628fc2a37a44f91e211d1bf2f,PodSandboxId:1ef5b142abd5f4006f05cf1783cb475cbeca7e8d0b6f5660a5080c8b9bc725e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1730116664155516003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f8qvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: abf41645-f094-4078-a00b-100a55ed83d8,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2fa7be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5b29d3adfc91bd965a0e39a7a416c7e10a3b2ced56d9d132401f7695aeb027,PodSandboxId:e54960d1efd048bcc33655bd1beee30f601c64604764aa542e6e828afea7dd20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730116663886916123,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07
aab3fa-d57c-42c2-bd28-ddc163dc7be2,},Annotations:map[string]string{io.kubernetes.container.hash: 62f21e33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608442e75b51e3d918ef007081eeca248899b8ef365f92f51f5a21a94def56f6,PodSandboxId:d8d33b5aba13318227fc0d56e6577e8060a3185a31c96759db784cd6a780d5bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1730116657980644040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bbb0fe7aa0538c49988b66aa56446ec,},Anno
tations:map[string]string{io.kubernetes.container.hash: 54b9989d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d91fdd963f2a93d11b82255cf12486fcd71945096b7337f1fb9318afdd7951,PodSandboxId:29c892f055afa0d6652c0a871a2c13fe14b0a84a35cc7b2f5fadba843ac71936,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1730116657973653668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1eb687a1211312bb8d4e2
405008cb4,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379d6610b22e0b36425c41110ed9b1fb06b6368335aa6cc55f7bc13a7a451c01,PodSandboxId:2b5aebc34e3e30d13c8ad83c0953da1430b92550c3a93f07dbe1614c7f32ddbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1730116657896834655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 612367f03a68d1db1041812e4b589683,}
,Annotations:map[string]string{io.kubernetes.container.hash: e63b93b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7cfa35836803cefed1f2501179a94791c6c3fe2677c4f39916cd5c71ba874ff,PodSandboxId:a1492f438e9cff5b73245902e5d174812c17b10e557693fee90e6af550f2530b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1730116657868027953,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-083517,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd1c7e453bdc5631c75728534a12051,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51f0b5ea-0249-48f2-ae4e-44b9221543f0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7dfcaf8af91be       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   7e94c24242a56       coredns-6d4b75cb6d-skkbc
	095704c1378e6       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   1ef5b142abd5f       kube-proxy-f8qvv
	0a5b29d3adfc9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   e54960d1efd04       storage-provisioner
	608442e75b51e       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   d8d33b5aba133       etcd-test-preload-083517
	e0d91fdd963f2       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   29c892f055afa       kube-controller-manager-test-preload-083517
	379d6610b22e0       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   2b5aebc34e3e3       kube-apiserver-test-preload-083517
	e7cfa35836803       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   a1492f438e9cf       kube-scheduler-test-preload-083517
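	(Editor's aside, not part of the captured output: the table above is the human-readable form of the unfiltered `ListContainers` responses in the crio debug log. A minimal, hedged Go sketch of issuing the same CRI call against the crio socket named in the node's `kubeadm.alpha.kubernetes.io/cri-socket` annotation could look like the following; the socket path and import versions are assumptions taken from the log, not verified against this build.)

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket referenced in the node annotations (assumed path).
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// Equivalent of the unfiltered ListContainers requests seen in the debug log.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Print a short form similar to the "container status" table above.
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}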
	
	
	==> coredns [7dfcaf8af91be1b76c16056f65a72296e7ccebcec342716abca978c17bc3037b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:40996 - 10023 "HINFO IN 5362012801043999829.5470940733714860342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01890843s
	
	
	==> describe nodes <==
	Name:               test-preload-083517
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-083517
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=test-preload-083517
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T11_56_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 11:56:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-083517
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 11:57:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 11:57:53 +0000   Mon, 28 Oct 2024 11:56:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 11:57:53 +0000   Mon, 28 Oct 2024 11:56:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 11:57:53 +0000   Mon, 28 Oct 2024 11:56:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 11:57:53 +0000   Mon, 28 Oct 2024 11:57:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    test-preload-083517
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2cd7471e94584fa28bc150acd8606f2c
	  System UUID:                2cd7471e-9458-4fa2-8bc1-50acd8606f2c
	  Boot ID:                    506bf00d-222a-4e7b-923a-2d5757a071c8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-skkbc                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     89s
	  kube-system                 etcd-test-preload-083517                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         102s
	  kube-system                 kube-apiserver-test-preload-083517             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-test-preload-083517    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-f8qvv                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-test-preload-083517             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 87s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  110s (x5 over 110s)  kubelet          Node test-preload-083517 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s (x5 over 110s)  kubelet          Node test-preload-083517 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s (x4 over 110s)  kubelet          Node test-preload-083517 status is now: NodeHasSufficientPID
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node test-preload-083517 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node test-preload-083517 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node test-preload-083517 status is now: NodeHasSufficientPID
	  Normal  NodeReady                92s                  kubelet          Node test-preload-083517 status is now: NodeReady
	  Normal  RegisteredNode           90s                  node-controller  Node test-preload-083517 event: Registered Node test-preload-083517 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-083517 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-083517 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-083517 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                   node-controller  Node test-preload-083517 event: Registered Node test-preload-083517 in Controller
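	(Editor's aside, not part of the captured output: the conditions and events above are `kubectl describe node` output. As an illustrative sketch only, the same Ready/MemoryPressure/DiskPressure conditions could be read programmatically with client-go roughly as below; the kubeconfig location is an assumption.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; minikube normally merges its context into ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		node, err := clientset.CoreV1().Nodes().Get(context.Background(),
			"test-preload-083517", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Print the same condition columns shown in the describe output above.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
		}
	}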
	
	
	==> dmesg <==
	[Oct28 11:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053000] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041935] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.911472] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.978382] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.599219] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.454812] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.060709] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057431] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.209658] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.117780] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.287480] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[ +12.866118] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.061151] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.109924] systemd-fstab-generator[1109]: Ignoring "noauto" option for root device
	[  +5.133472] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.522496] systemd-fstab-generator[1744]: Ignoring "noauto" option for root device
	[  +5.450744] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.050529] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [608442e75b51e3d918ef007081eeca248899b8ef365f92f51f5a21a94def56f6] <==
	{"level":"info","ts":"2024-10-28T11:57:38.391Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"f4acae94ef986412","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-28T11:57:38.395Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-28T11:57:38.398Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T11:57:38.401Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f4acae94ef986412","initial-advertise-peer-urls":["https://192.168.39.230:2380"],"listen-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.230:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T11:57:38.401Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T11:57:38.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 switched to configuration voters=(17630658595946783762)"}
	{"level":"info","ts":"2024-10-28T11:57:38.404Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","added-peer-id":"f4acae94ef986412","added-peer-peer-urls":["https://192.168.39.230:2380"]}
	{"level":"info","ts":"2024-10-28T11:57:38.403Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-10-28T11:57:38.405Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2024-10-28T11:57:38.405Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:57:38.405Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T11:57:40.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-28T11:57:40.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T11:57:40.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2024-10-28T11:57:40.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T11:57:40.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-10-28T11:57:40.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2024-10-28T11:57:40.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2024-10-28T11:57:40.245Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:test-preload-083517 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T11:57:40.246Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:57:40.246Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T11:57:40.247Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2024-10-28T11:57:40.248Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T11:57:40.248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T11:57:40.248Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:57:59 up 0 min,  0 users,  load average: 1.02, 0.31, 0.11
	Linux test-preload-083517 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [379d6610b22e0b36425c41110ed9b1fb06b6368335aa6cc55f7bc13a7a451c01] <==
	I1028 11:57:42.669007       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1028 11:57:42.651806       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1028 11:57:42.679727       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1028 11:57:42.707337       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1028 11:57:42.679766       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1028 11:57:42.679776       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E1028 11:57:42.783196       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1028 11:57:42.791086       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 11:57:42.793234       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1028 11:57:42.807776       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1028 11:57:42.807895       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1028 11:57:42.808293       1 cache.go:39] Caches are synced for autoregister controller
	I1028 11:57:42.808439       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1028 11:57:42.814018       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1028 11:57:42.881122       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 11:57:43.320194       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1028 11:57:43.663556       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 11:57:44.435808       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1028 11:57:44.453920       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1028 11:57:44.502645       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1028 11:57:44.522148       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 11:57:44.534947       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 11:57:44.597097       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1028 11:57:55.903457       1 controller.go:611] quota admission added evaluator for: endpoints
	I1028 11:57:55.966057       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e0d91fdd963f2a93d11b82255cf12486fcd71945096b7337f1fb9318afdd7951] <==
	I1028 11:57:55.927386       1 range_allocator.go:173] Starting range CIDR allocator
	I1028 11:57:55.927392       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1028 11:57:55.927400       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1028 11:57:55.936659       1 shared_informer.go:262] Caches are synced for persistent volume
	I1028 11:57:55.955260       1 shared_informer.go:262] Caches are synced for crt configmap
	I1028 11:57:55.955370       1 shared_informer.go:262] Caches are synced for PVC protection
	I1028 11:57:55.955435       1 shared_informer.go:262] Caches are synced for job
	I1028 11:57:55.955556       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1028 11:57:55.960039       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1028 11:57:55.961737       1 shared_informer.go:262] Caches are synced for deployment
	I1028 11:57:56.104502       1 shared_informer.go:262] Caches are synced for daemon sets
	I1028 11:57:56.107010       1 shared_informer.go:262] Caches are synced for taint
	I1028 11:57:56.107085       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1028 11:57:56.107205       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-083517. Assuming now as a timestamp.
	I1028 11:57:56.107252       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1028 11:57:56.107276       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1028 11:57:56.107496       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1028 11:57:56.107847       1 event.go:294] "Event occurred" object="test-preload-083517" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-083517 event: Registered Node test-preload-083517 in Controller"
	I1028 11:57:56.120498       1 shared_informer.go:262] Caches are synced for resource quota
	I1028 11:57:56.125692       1 shared_informer.go:262] Caches are synced for disruption
	I1028 11:57:56.125764       1 disruption.go:371] Sending events to api server.
	I1028 11:57:56.136496       1 shared_informer.go:262] Caches are synced for resource quota
	I1028 11:57:56.574254       1 shared_informer.go:262] Caches are synced for garbage collector
	I1028 11:57:56.602807       1 shared_informer.go:262] Caches are synced for garbage collector
	I1028 11:57:56.602846       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [095704c1378e651bbbd0f2144b8d237f30ae8e7628fc2a37a44f91e211d1bf2f] <==
	I1028 11:57:44.519463       1 node.go:163] Successfully retrieved node IP: 192.168.39.230
	I1028 11:57:44.519776       1 server_others.go:138] "Detected node IP" address="192.168.39.230"
	I1028 11:57:44.519917       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1028 11:57:44.590941       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1028 11:57:44.590973       1 server_others.go:206] "Using iptables Proxier"
	I1028 11:57:44.591025       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1028 11:57:44.591677       1 server.go:661] "Version info" version="v1.24.4"
	I1028 11:57:44.591740       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:57:44.592983       1 config.go:226] "Starting endpoint slice config controller"
	I1028 11:57:44.593035       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1028 11:57:44.593068       1 config.go:317] "Starting service config controller"
	I1028 11:57:44.593084       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1028 11:57:44.593959       1 config.go:444] "Starting node config controller"
	I1028 11:57:44.593986       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1028 11:57:44.693565       1 shared_informer.go:262] Caches are synced for service config
	I1028 11:57:44.693761       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1028 11:57:44.694189       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [e7cfa35836803cefed1f2501179a94791c6c3fe2677c4f39916cd5c71ba874ff] <==
	I1028 11:57:38.753403       1 serving.go:348] Generated self-signed cert in-memory
	W1028 11:57:42.752772       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 11:57:42.753103       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 11:57:42.753219       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 11:57:42.753249       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 11:57:42.799969       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1028 11:57:42.800115       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 11:57:42.805497       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 11:57:42.805748       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 11:57:42.807731       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1028 11:57:42.808162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1028 11:57:42.906974       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 11:57:42 test-preload-083517 kubelet[1116]: I1028 11:57:42.849839    1116 setters.go:532] "Node became not ready" node="test-preload-083517" condition={Type:Ready Status:False LastHeartbeatTime:2024-10-28 11:57:42.849796548 +0000 UTC m=+5.877395230 LastTransitionTime:2024-10-28 11:57:42.849796548 +0000 UTC m=+5.877395230 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.150796    1116 apiserver.go:52] "Watching apiserver"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.158216    1116 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.158351    1116 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.158390    1116 topology_manager.go:200] "Topology Admit Handler"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: E1028 11:57:43.161319    1116 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-skkbc" podUID=70dd8180-76ad-4162-a4e0-0dda4601739a
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.209390    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70dd8180-76ad-4162-a4e0-0dda4601739a-config-volume\") pod \"coredns-6d4b75cb6d-skkbc\" (UID: \"70dd8180-76ad-4162-a4e0-0dda4601739a\") " pod="kube-system/coredns-6d4b75cb6d-skkbc"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.209503    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm5ft\" (UniqueName: \"kubernetes.io/projected/70dd8180-76ad-4162-a4e0-0dda4601739a-kube-api-access-mm5ft\") pod \"coredns-6d4b75cb6d-skkbc\" (UID: \"70dd8180-76ad-4162-a4e0-0dda4601739a\") " pod="kube-system/coredns-6d4b75cb6d-skkbc"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.209570    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcj8p\" (UniqueName: \"kubernetes.io/projected/abf41645-f094-4078-a00b-100a55ed83d8-kube-api-access-lcj8p\") pod \"kube-proxy-f8qvv\" (UID: \"abf41645-f094-4078-a00b-100a55ed83d8\") " pod="kube-system/kube-proxy-f8qvv"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.209668    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9jr\" (UniqueName: \"kubernetes.io/projected/07aab3fa-d57c-42c2-bd28-ddc163dc7be2-kube-api-access-gk9jr\") pod \"storage-provisioner\" (UID: \"07aab3fa-d57c-42c2-bd28-ddc163dc7be2\") " pod="kube-system/storage-provisioner"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.209744    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abf41645-f094-4078-a00b-100a55ed83d8-xtables-lock\") pod \"kube-proxy-f8qvv\" (UID: \"abf41645-f094-4078-a00b-100a55ed83d8\") " pod="kube-system/kube-proxy-f8qvv"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.209763    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/07aab3fa-d57c-42c2-bd28-ddc163dc7be2-tmp\") pod \"storage-provisioner\" (UID: \"07aab3fa-d57c-42c2-bd28-ddc163dc7be2\") " pod="kube-system/storage-provisioner"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.209873    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/abf41645-f094-4078-a00b-100a55ed83d8-kube-proxy\") pod \"kube-proxy-f8qvv\" (UID: \"abf41645-f094-4078-a00b-100a55ed83d8\") " pod="kube-system/kube-proxy-f8qvv"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.209890    1116 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abf41645-f094-4078-a00b-100a55ed83d8-lib-modules\") pod \"kube-proxy-f8qvv\" (UID: \"abf41645-f094-4078-a00b-100a55ed83d8\") " pod="kube-system/kube-proxy-f8qvv"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: I1028 11:57:43.209959    1116 reconciler.go:159] "Reconciler: start to sync state"
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: E1028 11:57:43.315431    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: E1028 11:57:43.315560    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/70dd8180-76ad-4162-a4e0-0dda4601739a-config-volume podName:70dd8180-76ad-4162-a4e0-0dda4601739a nodeName:}" failed. No retries permitted until 2024-10-28 11:57:43.815515001 +0000 UTC m=+6.843113699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/70dd8180-76ad-4162-a4e0-0dda4601739a-config-volume") pod "coredns-6d4b75cb6d-skkbc" (UID: "70dd8180-76ad-4162-a4e0-0dda4601739a") : object "kube-system"/"coredns" not registered
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: E1028 11:57:43.819203    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 28 11:57:43 test-preload-083517 kubelet[1116]: E1028 11:57:43.819272    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/70dd8180-76ad-4162-a4e0-0dda4601739a-config-volume podName:70dd8180-76ad-4162-a4e0-0dda4601739a nodeName:}" failed. No retries permitted until 2024-10-28 11:57:44.819257863 +0000 UTC m=+7.846856558 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/70dd8180-76ad-4162-a4e0-0dda4601739a-config-volume") pod "coredns-6d4b75cb6d-skkbc" (UID: "70dd8180-76ad-4162-a4e0-0dda4601739a") : object "kube-system"/"coredns" not registered
	Oct 28 11:57:44 test-preload-083517 kubelet[1116]: E1028 11:57:44.260452    1116 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-skkbc" podUID=70dd8180-76ad-4162-a4e0-0dda4601739a
	Oct 28 11:57:44 test-preload-083517 kubelet[1116]: E1028 11:57:44.827322    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 28 11:57:44 test-preload-083517 kubelet[1116]: E1028 11:57:44.827466    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/70dd8180-76ad-4162-a4e0-0dda4601739a-config-volume podName:70dd8180-76ad-4162-a4e0-0dda4601739a nodeName:}" failed. No retries permitted until 2024-10-28 11:57:46.82745104 +0000 UTC m=+9.855049723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/70dd8180-76ad-4162-a4e0-0dda4601739a-config-volume") pod "coredns-6d4b75cb6d-skkbc" (UID: "70dd8180-76ad-4162-a4e0-0dda4601739a") : object "kube-system"/"coredns" not registered
	Oct 28 11:57:46 test-preload-083517 kubelet[1116]: E1028 11:57:46.260316    1116 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-skkbc" podUID=70dd8180-76ad-4162-a4e0-0dda4601739a
	Oct 28 11:57:46 test-preload-083517 kubelet[1116]: E1028 11:57:46.844924    1116 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 28 11:57:46 test-preload-083517 kubelet[1116]: E1028 11:57:46.845059    1116 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/70dd8180-76ad-4162-a4e0-0dda4601739a-config-volume podName:70dd8180-76ad-4162-a4e0-0dda4601739a nodeName:}" failed. No retries permitted until 2024-10-28 11:57:50.845040042 +0000 UTC m=+13.872638740 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/70dd8180-76ad-4162-a4e0-0dda4601739a-config-volume") pod "coredns-6d4b75cb6d-skkbc" (UID: "70dd8180-76ad-4162-a4e0-0dda4601739a") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [0a5b29d3adfc91bd965a0e39a7a416c7e10a3b2ced56d9d132401f7695aeb027] <==
	I1028 11:57:44.018194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-083517 -n test-preload-083517
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-083517 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-083517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-083517
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-083517: (1.126277627s)
--- FAIL: TestPreload (181.91s)
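
Note: the kubelet log in the post-mortem above fails repeatedly with "No CNI configuration file in /etc/cni/net.d/", which is what keeps the node NotReady and the coredns config-volume mount retrying. As a quick way to reproduce that check outside the test, here is a minimal sketch (hypothetical file cni_check.go, not part of minikube or this suite) that lists whatever CNI configs are present in that directory:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken from the kubelet error message in the log above.
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		// CRI-O and the kubelet pick up .conf, .conflist and .json files from this directory.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI config present - kubelet will keep reporting NetworkPluginNotReady")
	}
}

If the directory is empty on the preloaded node, the "container runtime network not ready" messages above are expected until a CNI plugin writes its config.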

                                                
                                    
x
+
TestKubernetesUpgrade (356.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-337849 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-337849 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m36.597408957s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-337849] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-337849" primary control-plane node in "kubernetes-upgrade-337849" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:02:51.740835  179606 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:02:51.740977  179606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:51.740991  179606 out.go:358] Setting ErrFile to fd 2...
	I1028 12:02:51.740997  179606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:51.741280  179606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:02:51.742078  179606 out.go:352] Setting JSON to false
	I1028 12:02:51.743512  179606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6315,"bootTime":1730110657,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:02:51.743637  179606 start.go:139] virtualization: kvm guest
	I1028 12:02:51.746039  179606 out.go:177] * [kubernetes-upgrade-337849] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:02:51.748098  179606 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:02:51.748108  179606 notify.go:220] Checking for updates...
	I1028 12:02:51.751103  179606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:02:51.752731  179606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:02:51.754442  179606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:02:51.755911  179606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:02:51.757560  179606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:02:51.759554  179606 config.go:182] Loaded profile config "NoKubernetes-606176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:51.759653  179606 config.go:182] Loaded profile config "running-upgrade-628680": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 12:02:51.759728  179606 config.go:182] Loaded profile config "stopped-upgrade-755815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 12:02:51.759814  179606 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:02:51.800070  179606 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 12:02:51.801772  179606 start.go:297] selected driver: kvm2
	I1028 12:02:51.801797  179606 start.go:901] validating driver "kvm2" against <nil>
	I1028 12:02:51.801825  179606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:02:51.802922  179606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:51.803023  179606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:02:51.819970  179606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:02:51.820021  179606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 12:02:51.820268  179606 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 12:02:51.820292  179606 cni.go:84] Creating CNI manager for ""
	I1028 12:02:51.820362  179606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:02:51.820374  179606 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 12:02:51.820419  179606 start.go:340] cluster config:
	{Name:kubernetes-upgrade-337849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:02:51.820505  179606 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:51.822292  179606 out.go:177] * Starting "kubernetes-upgrade-337849" primary control-plane node in "kubernetes-upgrade-337849" cluster
	I1028 12:02:51.823941  179606 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:02:51.824008  179606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 12:02:51.824024  179606 cache.go:56] Caching tarball of preloaded images
	I1028 12:02:51.824146  179606 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:02:51.824161  179606 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1028 12:02:51.824253  179606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/config.json ...
	I1028 12:02:51.824276  179606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/config.json: {Name:mk37f038a213337b0a80f563412944f30367f993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:51.824469  179606 start.go:360] acquireMachinesLock for kubernetes-upgrade-337849: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:02:55.783679  179606 start.go:364] duration metric: took 3.959148071s to acquireMachinesLock for "kubernetes-upgrade-337849"
	I1028 12:02:55.783736  179606 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-337849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:02:55.783852  179606 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 12:02:55.786065  179606 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 12:02:55.786283  179606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:55.786342  179606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:55.808500  179606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I1028 12:02:55.808956  179606 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:55.809640  179606 main.go:141] libmachine: Using API Version  1
	I1028 12:02:55.809675  179606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:55.810347  179606 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:55.810709  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetMachineName
	I1028 12:02:55.810894  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:02:55.811102  179606 start.go:159] libmachine.API.Create for "kubernetes-upgrade-337849" (driver="kvm2")
	I1028 12:02:55.811131  179606 client.go:168] LocalClient.Create starting
	I1028 12:02:55.811166  179606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 12:02:55.811204  179606 main.go:141] libmachine: Decoding PEM data...
	I1028 12:02:55.811225  179606 main.go:141] libmachine: Parsing certificate...
	I1028 12:02:55.811286  179606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 12:02:55.811314  179606 main.go:141] libmachine: Decoding PEM data...
	I1028 12:02:55.811327  179606 main.go:141] libmachine: Parsing certificate...
	I1028 12:02:55.811356  179606 main.go:141] libmachine: Running pre-create checks...
	I1028 12:02:55.811366  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .PreCreateCheck
	I1028 12:02:55.811968  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetConfigRaw
	I1028 12:02:55.812452  179606 main.go:141] libmachine: Creating machine...
	I1028 12:02:55.812466  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .Create
	I1028 12:02:55.812584  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Creating KVM machine...
	I1028 12:02:55.814474  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found existing default KVM network
	I1028 12:02:55.816296  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:55.815958  179755 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:31:64:2d} reservation:<nil>}
	I1028 12:02:55.817857  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:55.817767  179755 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015fb0}
	I1028 12:02:55.817879  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | created network xml: 
	I1028 12:02:55.817890  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | <network>
	I1028 12:02:55.817912  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG |   <name>mk-kubernetes-upgrade-337849</name>
	I1028 12:02:55.817923  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG |   <dns enable='no'/>
	I1028 12:02:55.817929  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG |   
	I1028 12:02:55.817935  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1028 12:02:55.817945  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG |     <dhcp>
	I1028 12:02:55.818283  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1028 12:02:55.818319  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG |     </dhcp>
	I1028 12:02:55.818326  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG |   </ip>
	I1028 12:02:55.818333  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG |   
	I1028 12:02:55.818339  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | </network>
	I1028 12:02:55.818348  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | 
	I1028 12:02:55.824604  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | trying to create private KVM network mk-kubernetes-upgrade-337849 192.168.50.0/24...
	I1028 12:02:55.919112  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849 ...
	I1028 12:02:55.919141  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 12:02:55.919151  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | private KVM network mk-kubernetes-upgrade-337849 192.168.50.0/24 created
	I1028 12:02:55.919168  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:55.917957  179755 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:02:55.919187  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 12:02:56.245674  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:56.241963  179755 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa...
	I1028 12:02:56.460004  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:56.459787  179755 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/kubernetes-upgrade-337849.rawdisk...
	I1028 12:02:56.460058  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Writing magic tar header
	I1028 12:02:56.460079  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Writing SSH key tar header
	I1028 12:02:56.460093  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:56.459969  179755 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849 ...
	I1028 12:02:56.460114  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849
	I1028 12:02:56.460143  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 12:02:56.460154  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:02:56.460174  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 12:02:56.460185  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 12:02:56.460200  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Checking permissions on dir: /home/jenkins
	I1028 12:02:56.460208  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Checking permissions on dir: /home
	I1028 12:02:56.460221  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Skipping /home - not owner
	I1028 12:02:56.460239  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849 (perms=drwx------)
	I1028 12:02:56.460251  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 12:02:56.460263  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 12:02:56.460274  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 12:02:56.460285  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 12:02:56.460294  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 12:02:56.460303  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Creating domain...
	I1028 12:02:56.463160  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) define libvirt domain using xml: 
	I1028 12:02:56.463190  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) <domain type='kvm'>
	I1028 12:02:56.463203  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   <name>kubernetes-upgrade-337849</name>
	I1028 12:02:56.463217  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   <memory unit='MiB'>2200</memory>
	I1028 12:02:56.463229  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   <vcpu>2</vcpu>
	I1028 12:02:56.463242  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   <features>
	I1028 12:02:56.463254  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <acpi/>
	I1028 12:02:56.463267  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <apic/>
	I1028 12:02:56.463277  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <pae/>
	I1028 12:02:56.463284  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     
	I1028 12:02:56.463361  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   </features>
	I1028 12:02:56.463391  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   <cpu mode='host-passthrough'>
	I1028 12:02:56.463402  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   
	I1028 12:02:56.463409  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   </cpu>
	I1028 12:02:56.463431  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   <os>
	I1028 12:02:56.463443  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <type>hvm</type>
	I1028 12:02:56.463453  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <boot dev='cdrom'/>
	I1028 12:02:56.463460  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <boot dev='hd'/>
	I1028 12:02:56.463469  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <bootmenu enable='no'/>
	I1028 12:02:56.463476  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   </os>
	I1028 12:02:56.463484  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   <devices>
	I1028 12:02:56.463491  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <disk type='file' device='cdrom'>
	I1028 12:02:56.463505  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/boot2docker.iso'/>
	I1028 12:02:56.463515  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <target dev='hdc' bus='scsi'/>
	I1028 12:02:56.463523  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <readonly/>
	I1028 12:02:56.463533  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     </disk>
	I1028 12:02:56.463543  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <disk type='file' device='disk'>
	I1028 12:02:56.463564  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 12:02:56.463579  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/kubernetes-upgrade-337849.rawdisk'/>
	I1028 12:02:56.463594  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <target dev='hda' bus='virtio'/>
	I1028 12:02:56.463603  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     </disk>
	I1028 12:02:56.463611  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <interface type='network'>
	I1028 12:02:56.463621  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <source network='mk-kubernetes-upgrade-337849'/>
	I1028 12:02:56.463633  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <model type='virtio'/>
	I1028 12:02:56.463642  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     </interface>
	I1028 12:02:56.463654  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <interface type='network'>
	I1028 12:02:56.463667  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <source network='default'/>
	I1028 12:02:56.463679  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <model type='virtio'/>
	I1028 12:02:56.463689  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     </interface>
	I1028 12:02:56.463699  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <serial type='pty'>
	I1028 12:02:56.463709  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <target port='0'/>
	I1028 12:02:56.463716  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     </serial>
	I1028 12:02:56.463738  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <console type='pty'>
	I1028 12:02:56.463746  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <target type='serial' port='0'/>
	I1028 12:02:56.463754  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     </console>
	I1028 12:02:56.463761  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     <rng model='virtio'>
	I1028 12:02:56.463771  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)       <backend model='random'>/dev/random</backend>
	I1028 12:02:56.463778  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     </rng>
	I1028 12:02:56.463787  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     
	I1028 12:02:56.463799  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)     
	I1028 12:02:56.463819  179606 main.go:141] libmachine: (kubernetes-upgrade-337849)   </devices>
	I1028 12:02:56.463831  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) </domain>
	I1028 12:02:56.463847  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) 
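
For reference, the <network> and <domain> XML that the kvm2 driver prints above can be inspected on the host with virsh while the machine exists. A minimal sketch (hypothetical helper, assuming libvirt's virsh client is installed and the names from this log are still defined):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to virsh and prints whatever it returns.
func run(args ...string) {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("virsh %v failed: %v\n", args, err)
	}
	fmt.Print(string(out))
}

func main() {
	// Names taken from the log above; both objects exist only while the test VM is up.
	run("net-dumpxml", "mk-kubernetes-upgrade-337849") // private network created earlier
	run("dumpxml", "kubernetes-upgrade-337849")        // domain defined from the XML above
}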
	I1028 12:02:56.468476  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:7f:33:de in network default
	I1028 12:02:56.469235  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Ensuring networks are active...
	I1028 12:02:56.469263  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:02:56.470213  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Ensuring network default is active
	I1028 12:02:56.470680  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Ensuring network mk-kubernetes-upgrade-337849 is active
	I1028 12:02:56.471563  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Getting domain xml...
	I1028 12:02:56.472792  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Creating domain...
	I1028 12:02:58.336106  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Waiting to get IP...
	I1028 12:02:58.337199  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:02:58.337782  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:02:58.337833  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:58.337768  179755 retry.go:31] will retry after 278.739846ms: waiting for machine to come up
	I1028 12:02:58.618503  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:02:58.619034  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:02:58.619062  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:58.619002  179755 retry.go:31] will retry after 324.997409ms: waiting for machine to come up
	I1028 12:02:58.945948  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:02:58.946599  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:02:58.946643  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:58.946539  179755 retry.go:31] will retry after 385.338344ms: waiting for machine to come up
	I1028 12:02:59.332891  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:02:59.333518  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:02:59.333573  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:59.333444  179755 retry.go:31] will retry after 401.812425ms: waiting for machine to come up
	I1028 12:02:59.737364  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:02:59.737971  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:02:59.737997  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:02:59.737916  179755 retry.go:31] will retry after 605.438376ms: waiting for machine to come up
	I1028 12:03:00.344854  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:00.345624  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:03:00.345666  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:03:00.345557  179755 retry.go:31] will retry after 584.143917ms: waiting for machine to come up
	I1028 12:03:00.931753  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:00.932166  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:03:00.932190  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:03:00.932107  179755 retry.go:31] will retry after 813.70725ms: waiting for machine to come up
	I1028 12:03:01.748038  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:01.748645  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:03:01.748674  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:03:01.748574  179755 retry.go:31] will retry after 1.036128438s: waiting for machine to come up
	I1028 12:03:02.787866  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:02.788588  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:03:02.788617  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:03:02.788467  179755 retry.go:31] will retry after 1.666626907s: waiting for machine to come up
	I1028 12:03:04.457284  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:04.457726  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:03:04.457785  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:03:04.457696  179755 retry.go:31] will retry after 1.832863001s: waiting for machine to come up
	I1028 12:03:06.291807  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:06.292346  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:03:06.292365  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:03:06.292261  179755 retry.go:31] will retry after 2.600780883s: waiting for machine to come up
	I1028 12:03:08.896038  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:08.896672  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:03:08.896702  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:03:08.896627  179755 retry.go:31] will retry after 3.359021694s: waiting for machine to come up
	I1028 12:03:12.258379  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:12.258848  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:03:12.258879  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:03:12.258800  179755 retry.go:31] will retry after 4.357961329s: waiting for machine to come up
	I1028 12:03:16.618955  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:16.619520  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find current IP address of domain kubernetes-upgrade-337849 in network mk-kubernetes-upgrade-337849
	I1028 12:03:16.619550  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | I1028 12:03:16.619448  179755 retry.go:31] will retry after 3.43264266s: waiting for machine to come up
	I1028 12:03:20.055094  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.055692  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Found IP for machine: 192.168.50.142
	I1028 12:03:20.055711  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Reserving static IP address...
	I1028 12:03:20.055731  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has current primary IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.056214  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-337849", mac: "52:54:00:a5:94:dd", ip: "192.168.50.142"} in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.147802  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Getting to WaitForSSH function...
	I1028 12:03:20.147836  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Reserved static IP address: 192.168.50.142
	I1028 12:03:20.147849  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Waiting for SSH to be available...
	I1028 12:03:20.150676  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.151085  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:20.151111  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.151225  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Using SSH client type: external
	I1028 12:03:20.151248  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa (-rw-------)
	I1028 12:03:20.151278  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:03:20.151295  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | About to run SSH command:
	I1028 12:03:20.151307  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | exit 0
	I1028 12:03:20.286076  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | SSH cmd err, output: <nil>: 
	I1028 12:03:20.286471  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) KVM machine creation complete!
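
The block above shows the driver polling for a DHCP lease with growing retry intervals and then waiting for SSH before declaring the machine created. A minimal sketch of the same wait pattern (hypothetical, not minikube's implementation; the address comes from the lease reserved in this log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.50.142:22" // IP reserved for kubernetes-upgrade-337849 above
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("ssh port reachable after %d attempt(s)\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // back off, roughly like the retry.go delays in the log
	}
	fmt.Println("gave up waiting for ssh")
}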
	I1028 12:03:20.286836  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetConfigRaw
	I1028 12:03:20.287568  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:03:20.287769  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:03:20.287940  179606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 12:03:20.287958  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetState
	I1028 12:03:20.289488  179606 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 12:03:20.289518  179606 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 12:03:20.289545  179606 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 12:03:20.289558  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:20.292217  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.292670  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:20.292703  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.292802  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:20.292980  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:20.293118  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:20.293281  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:20.293475  179606 main.go:141] libmachine: Using SSH client type: native
	I1028 12:03:20.293764  179606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:03:20.293778  179606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 12:03:20.405545  179606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:03:20.405573  179606 main.go:141] libmachine: Detecting the provisioner...
	I1028 12:03:20.405600  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:20.408998  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.409316  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:20.409347  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.409542  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:20.409767  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:20.409960  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:20.410117  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:20.410279  179606 main.go:141] libmachine: Using SSH client type: native
	I1028 12:03:20.410438  179606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:03:20.410449  179606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 12:03:20.524515  179606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 12:03:20.524610  179606 main.go:141] libmachine: found compatible host: buildroot
	I1028 12:03:20.524625  179606 main.go:141] libmachine: Provisioning with buildroot...
	I1028 12:03:20.524636  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetMachineName
	I1028 12:03:20.524936  179606 buildroot.go:166] provisioning hostname "kubernetes-upgrade-337849"
	I1028 12:03:20.524966  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetMachineName
	I1028 12:03:20.525176  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:20.528303  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.528712  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:20.528741  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.528925  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:20.529182  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:20.529374  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:20.529580  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:20.529780  179606 main.go:141] libmachine: Using SSH client type: native
	I1028 12:03:20.530025  179606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:03:20.530043  179606 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-337849 && echo "kubernetes-upgrade-337849" | sudo tee /etc/hostname
	I1028 12:03:20.656714  179606 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-337849
	
	I1028 12:03:20.656752  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:20.659931  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.660327  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:20.660359  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.660516  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:20.660722  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:20.660938  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:20.661106  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:20.661288  179606 main.go:141] libmachine: Using SSH client type: native
	I1028 12:03:20.661490  179606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:03:20.661510  179606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-337849' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-337849/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-337849' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:03:20.784965  179606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:03:20.785004  179606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:03:20.785027  179606 buildroot.go:174] setting up certificates
	I1028 12:03:20.785040  179606 provision.go:84] configureAuth start
	I1028 12:03:20.785053  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetMachineName
	I1028 12:03:20.785356  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetIP
	I1028 12:03:20.788699  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.789065  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:20.789102  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.789229  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:20.791938  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.792257  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:20.792287  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:20.792486  179606 provision.go:143] copyHostCerts
	I1028 12:03:20.792578  179606 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:03:20.792604  179606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:03:20.792679  179606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:03:20.792828  179606 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:03:20.792846  179606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:03:20.792880  179606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:03:20.792983  179606 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:03:20.792995  179606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:03:20.793023  179606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:03:20.793099  179606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-337849 san=[127.0.0.1 192.168.50.142 kubernetes-upgrade-337849 localhost minikube]
	I1028 12:03:21.039609  179606 provision.go:177] copyRemoteCerts
	I1028 12:03:21.039688  179606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:03:21.039721  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:21.043336  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.043736  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:21.043760  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.044134  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:21.044373  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:21.044532  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:21.044687  179606 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa Username:docker}
	I1028 12:03:21.132556  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:03:21.158372  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1028 12:03:21.183148  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:03:21.209866  179606 provision.go:87] duration metric: took 424.808941ms to configureAuth
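
The configureAuth step above mints a server certificate whose subject alternative names cover the VM's loopback and guest IPs plus its hostnames. A minimal Go sketch of that pattern follows, using the standard crypto/x509 package; it generates a throwaway CA in place of the ca.pem/ca-key.pem files from the machine store, and the organization, hostnames, and IPs are copied from the provision.go line above rather than taken from minikube's actual implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for ca.pem / ca-key.pem from the machine store.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate whose SAN list mirrors the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-337849"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-337849", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.142")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
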
	I1028 12:03:21.209906  179606 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:03:21.210122  179606 config.go:182] Loaded profile config "kubernetes-upgrade-337849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:03:21.210217  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:21.213658  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.214098  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:21.214133  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.214492  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:21.214753  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:21.214943  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:21.215105  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:21.215310  179606 main.go:141] libmachine: Using SSH client type: native
	I1028 12:03:21.215539  179606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:03:21.215561  179606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:03:21.501994  179606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
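
The block above writes an environment drop-in so CRI-O treats the in-cluster service CIDR as an insecure registry, then restarts the service. A hedged, stand-alone sketch of the same idea follows; the path and option string are taken from the log, this is not minikube's code, and it needs root to succeed.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Option string as written by the step above.
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		panic(err)
	}
	// Restart CRI-O so the new environment file is picked up.
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		panic(err)
	}
}
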
	I1028 12:03:21.502047  179606 main.go:141] libmachine: Checking connection to Docker...
	I1028 12:03:21.502062  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetURL
	I1028 12:03:21.503545  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | Using libvirt version 6000000
	I1028 12:03:21.505949  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.506336  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:21.506381  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.506559  179606 main.go:141] libmachine: Docker is up and running!
	I1028 12:03:21.506611  179606 main.go:141] libmachine: Reticulating splines...
	I1028 12:03:21.506624  179606 client.go:171] duration metric: took 25.69548333s to LocalClient.Create
	I1028 12:03:21.506647  179606 start.go:167] duration metric: took 25.695555369s to libmachine.API.Create "kubernetes-upgrade-337849"
	I1028 12:03:21.506660  179606 start.go:293] postStartSetup for "kubernetes-upgrade-337849" (driver="kvm2")
	I1028 12:03:21.506676  179606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:03:21.506700  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:03:21.506933  179606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:03:21.506954  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:21.509227  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.509553  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:21.509586  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.509774  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:21.509974  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:21.510142  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:21.510318  179606 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa Username:docker}
	I1028 12:03:21.600390  179606 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:03:21.605612  179606 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:03:21.605637  179606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:03:21.605705  179606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:03:21.605799  179606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:03:21.605910  179606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:03:21.619438  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:03:21.652954  179606 start.go:296] duration metric: took 146.276388ms for postStartSetup
	I1028 12:03:21.653022  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetConfigRaw
	I1028 12:03:21.654225  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetIP
	I1028 12:03:21.657004  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.657387  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:21.657416  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.657676  179606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/config.json ...
	I1028 12:03:21.658446  179606 start.go:128] duration metric: took 25.874572262s to createHost
	I1028 12:03:21.658477  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:21.660695  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.661025  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:21.661053  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.661181  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:21.661369  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:21.661548  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:21.661684  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:21.661837  179606 main.go:141] libmachine: Using SSH client type: native
	I1028 12:03:21.662058  179606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:03:21.662071  179606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:03:21.777563  179606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117001.757567516
	
	I1028 12:03:21.777584  179606 fix.go:216] guest clock: 1730117001.757567516
	I1028 12:03:21.777594  179606 fix.go:229] Guest: 2024-10-28 12:03:21.757567516 +0000 UTC Remote: 2024-10-28 12:03:21.658463918 +0000 UTC m=+29.958907298 (delta=99.103598ms)
	I1028 12:03:21.777629  179606 fix.go:200] guest clock delta is within tolerance: 99.103598ms
	I1028 12:03:21.777637  179606 start.go:83] releasing machines lock for "kubernetes-upgrade-337849", held for 25.993934971s
	I1028 12:03:21.777667  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:03:21.777991  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetIP
	I1028 12:03:21.782014  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.782462  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:21.782493  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.782695  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:03:21.783238  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:03:21.783491  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:03:21.783575  179606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:03:21.783621  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:21.783813  179606 ssh_runner.go:195] Run: cat /version.json
	I1028 12:03:21.783838  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:03:21.793581  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.793620  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.794104  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:21.794136  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.794164  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:21.794177  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:21.794354  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:21.794504  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:03:21.794595  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:21.794683  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:03:21.794741  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:21.794931  179606 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa Username:docker}
	I1028 12:03:21.794943  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:03:21.795118  179606 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa Username:docker}
	I1028 12:03:21.876067  179606 ssh_runner.go:195] Run: systemctl --version
	I1028 12:03:21.902893  179606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:03:22.086041  179606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:03:22.093617  179606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:03:22.093682  179606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:03:22.114729  179606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:03:22.114758  179606 start.go:495] detecting cgroup driver to use...
	I1028 12:03:22.114832  179606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:03:22.135440  179606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:03:22.151708  179606 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:03:22.151783  179606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:03:22.170789  179606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:03:22.188861  179606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:03:22.351719  179606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:03:22.530964  179606 docker.go:233] disabling docker service ...
	I1028 12:03:22.531046  179606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:03:22.549953  179606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:03:22.565675  179606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:03:22.714656  179606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:03:22.872174  179606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:03:22.893648  179606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:03:22.919303  179606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:03:22.919370  179606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:22.931386  179606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:03:22.931453  179606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:22.944515  179606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:22.956321  179606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:03:22.967881  179606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:03:22.980055  179606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:03:22.990565  179606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:03:22.990639  179606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:03:23.005558  179606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
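
When the bridge-netfilter sysctl cannot be read, the run falls back to loading the br_netfilter module and then enables IPv4 forwarding, as the three commands above show. A rough illustration of that fallback, written as a stand-alone Go program (assumed, not the ssh_runner calls used here; it requires root to take effect):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// 1. Probe the sysctl; a missing /proc entry means br_netfilter isn't loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf-call-iptables not readable, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
	// 2. Enable IPv4 forwarding so pod traffic can be routed off the node.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}
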
	I1028 12:03:23.017101  179606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:03:23.151228  179606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:03:23.267651  179606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:03:23.267726  179606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:03:23.273866  179606 start.go:563] Will wait 60s for crictl version
	I1028 12:03:23.273919  179606 ssh_runner.go:195] Run: which crictl
	I1028 12:03:23.278369  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:03:23.328708  179606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:03:23.328798  179606 ssh_runner.go:195] Run: crio --version
	I1028 12:03:23.360849  179606 ssh_runner.go:195] Run: crio --version
	I1028 12:03:23.392258  179606 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:03:23.393631  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetIP
	I1028 12:03:23.595393  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:23.595738  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:03:13 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:03:23.595817  179606 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:03:23.596123  179606 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 12:03:23.601831  179606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
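
The one-liner above keeps the host.minikube.internal entry idempotent: any stale line is filtered out before the current gateway IP is appended. A small Go equivalent of that rewrite (entry name and IP taken from the log; the temp-file-then-copy step is simplified to a direct write, and root is needed to modify /etc/hosts):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any previous entry so it is never duplicated
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
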
	I1028 12:03:23.618895  179606 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-337849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:03:23.619021  179606 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:03:23.619076  179606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:03:23.656529  179606 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:03:23.656604  179606 ssh_runner.go:195] Run: which lz4
	I1028 12:03:23.661080  179606 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:03:23.665578  179606 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:03:23.665611  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:03:25.426135  179606 crio.go:462] duration metric: took 1.765081897s to copy over tarball
	I1028 12:03:25.426214  179606 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:03:28.080691  179606 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.654440648s)
	I1028 12:03:28.080721  179606 crio.go:469] duration metric: took 2.654555359s to extract the tarball
	I1028 12:03:28.080731  179606 ssh_runner.go:146] rm: /preloaded.tar.lz4
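
The sequence above is the preload shortcut: if crictl does not report the marker image for the target Kubernetes version, the cached lz4 tarball is copied over and unpacked directly into /var to seed CRI-O's image store. A sketch of that check-then-extract logic, assuming the tarball has already been copied to /preloaded.tar.lz4 as in the log (not minikube's own code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage asks crictl for the image list and looks for a tag substring.
func hasImage(tag string) bool {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false
	}
	var imgs crictlImages
	if json.Unmarshal(out, &imgs) != nil {
		return false
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if strings.Contains(t, tag) {
				return true
			}
		}
	}
	return false
}

func main() {
	if hasImage("kube-apiserver:v1.20.0") {
		fmt.Println("preloaded images already present")
		return
	}
	// Extract the preloaded tarball into /var, preserving extended attributes.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}
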
	I1028 12:03:28.134841  179606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:03:28.188364  179606 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:03:28.188396  179606 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:03:28.188484  179606 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:03:28.188483  179606 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:03:28.188500  179606 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:03:28.188508  179606 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:03:28.188522  179606 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:03:28.188520  179606 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:03:28.188513  179606 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:03:28.188526  179606 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:03:28.190112  179606 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:03:28.190399  179606 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:03:28.190409  179606 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:03:28.190470  179606 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:03:28.190706  179606 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:03:28.190865  179606 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:03:28.190887  179606 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:03:28.191009  179606 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:03:28.376200  179606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:03:28.442430  179606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:03:28.450713  179606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:03:28.450745  179606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:03:28.462462  179606 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:03:28.462504  179606 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:03:28.462541  179606 ssh_runner.go:195] Run: which crictl
	I1028 12:03:28.472731  179606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:03:28.490694  179606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:03:28.501747  179606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:03:28.595626  179606 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:03:28.595692  179606 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:03:28.595753  179606 ssh_runner.go:195] Run: which crictl
	I1028 12:03:28.608055  179606 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:03:28.608107  179606 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:03:28.608149  179606 ssh_runner.go:195] Run: which crictl
	I1028 12:03:28.638035  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:03:28.638031  179606 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:03:28.638157  179606 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:03:28.638185  179606 ssh_runner.go:195] Run: which crictl
	I1028 12:03:28.669107  179606 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:03:28.669155  179606 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:03:28.669201  179606 ssh_runner.go:195] Run: which crictl
	I1028 12:03:28.680538  179606 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:03:28.680587  179606 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:03:28.680594  179606 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:03:28.680630  179606 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:03:28.680673  179606 ssh_runner.go:195] Run: which crictl
	I1028 12:03:28.680679  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:03:28.680632  179606 ssh_runner.go:195] Run: which crictl
	I1028 12:03:28.680770  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:03:28.744799  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:03:28.744881  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:03:28.744914  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:03:28.790156  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:03:28.790174  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:03:28.790267  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:03:28.790174  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:03:28.934527  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:03:28.952174  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:03:28.952187  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:03:28.968103  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:03:28.968128  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:03:28.968144  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:03:28.968213  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:03:29.048282  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:03:29.107952  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:03:29.130302  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:03:29.130308  179606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:03:29.130376  179606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:03:29.130924  179606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:03:29.131349  179606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:03:29.206189  179606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:03:29.231921  179606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:03:29.234058  179606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:03:29.234071  179606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:03:29.340459  179606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:03:29.485683  179606 cache_images.go:92] duration metric: took 1.29726721s to LoadCachedImages
	W1028 12:03:29.485798  179606 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1028 12:03:29.485815  179606 kubeadm.go:934] updating node { 192.168.50.142 8443 v1.20.0 crio true true} ...
	I1028 12:03:29.485916  179606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-337849 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:03:29.486000  179606 ssh_runner.go:195] Run: crio config
	I1028 12:03:29.538002  179606 cni.go:84] Creating CNI manager for ""
	I1028 12:03:29.538025  179606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:03:29.538035  179606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:03:29.538053  179606 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.142 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-337849 NodeName:kubernetes-upgrade-337849 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:03:29.538195  179606 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-337849"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:03:29.538275  179606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:03:29.551638  179606 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:03:29.551727  179606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:03:29.562572  179606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1028 12:03:29.580402  179606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:03:29.599544  179606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
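
The rendered kubeadm config above is shipped to the node as kubeadm.yaml.new. A hedged sketch of how such a file is typically consumed once in place; the kubeadm init invocation here is an assumption for illustration and is not shown in this excerpt of the log.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// cfg stands in for the InitConfiguration/ClusterConfiguration YAML shown earlier.
	cfg := []byte("# kubeadm config rendered by the test would go here\n")
	if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml", cfg, 0644); err != nil {
		panic(err)
	}
	// kubeadm init --config is the standard way to bootstrap from such a file.
	cmd := exec.Command("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
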
	I1028 12:03:29.617463  179606 ssh_runner.go:195] Run: grep 192.168.50.142	control-plane.minikube.internal$ /etc/hosts
	I1028 12:03:29.621664  179606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:03:29.635044  179606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:03:29.771091  179606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:03:29.789016  179606 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849 for IP: 192.168.50.142
	I1028 12:03:29.789039  179606 certs.go:194] generating shared ca certs ...
	I1028 12:03:29.789061  179606 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:29.789227  179606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:03:29.789286  179606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:03:29.789302  179606 certs.go:256] generating profile certs ...
	I1028 12:03:29.789371  179606 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/client.key
	I1028 12:03:29.789388  179606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/client.crt with IP's: []
	I1028 12:03:29.940379  179606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/client.crt ...
	I1028 12:03:29.940421  179606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/client.crt: {Name:mk11124ee54da9a0950cb680e5c668cade529ccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:29.940613  179606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/client.key ...
	I1028 12:03:29.940629  179606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/client.key: {Name:mk62ac0263ff001bc10e6be73e387d1664c7564c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:29.940737  179606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.key.da002a8f
	I1028 12:03:29.940768  179606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.crt.da002a8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.142]
	I1028 12:03:30.064677  179606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.crt.da002a8f ...
	I1028 12:03:30.064708  179606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.crt.da002a8f: {Name:mk4ce899f4c086ab218ed038c42da4ffb709e7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:30.064861  179606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.key.da002a8f ...
	I1028 12:03:30.064874  179606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.key.da002a8f: {Name:mk8f879a6499d21f32b569ad1b689c9a30742dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:30.064945  179606 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.crt.da002a8f -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.crt
	I1028 12:03:30.065033  179606 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.key.da002a8f -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.key
	I1028 12:03:30.065110  179606 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.key
	I1028 12:03:30.065127  179606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.crt with IP's: []
	I1028 12:03:30.200994  179606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.crt ...
	I1028 12:03:30.201023  179606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.crt: {Name:mk9f7392b54c85a8538f45bd3d190e697d4e1af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:30.201183  179606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.key ...
	I1028 12:03:30.201196  179606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.key: {Name:mkddcaa25fc8d09ec161ca8f44f661a4c8e5be64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:03:30.201374  179606 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:03:30.201411  179606 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:03:30.201422  179606 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:03:30.201445  179606 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:03:30.201474  179606 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:03:30.201499  179606 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:03:30.201556  179606 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:03:30.202185  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:03:30.232781  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:03:30.261694  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:03:30.288273  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:03:30.314657  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 12:03:30.341284  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:03:30.368973  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:03:30.407519  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:03:30.439438  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:03:30.466107  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:03:30.493298  179606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:03:30.521836  179606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:03:30.548033  179606 ssh_runner.go:195] Run: openssl version
	I1028 12:03:30.555001  179606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:03:30.569239  179606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:03:30.574747  179606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:03:30.574816  179606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:03:30.581505  179606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:03:30.596064  179606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:03:30.616243  179606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:03:30.622193  179606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:03:30.622264  179606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:03:30.631432  179606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:03:30.653928  179606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:03:30.671158  179606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:30.678411  179606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:30.678468  179606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:03:30.690816  179606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
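	The openssl/ln steps above install each CA certificate under its OpenSSL subject-hash name in /etc/ssl/certs, which is how TLS clients on the node locate it. A minimal sketch of that convention, assuming a hypothetical certificate at /usr/share/ca-certificates/example.pem (not a path from this log):
	
	    # Derive the subject-hash filename OpenSSL looks for (e.g. 51391683.0 in the log above).
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	    # Symlink the certificate under that name so the CA lookup in /etc/ssl/certs finds it.
	    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"
	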
	I1028 12:03:30.713344  179606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:03:30.718662  179606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:03:30.718727  179606 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-337849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:03:30.718815  179606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:03:30.718910  179606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:03:30.763348  179606 cri.go:89] found id: ""
	I1028 12:03:30.763427  179606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:03:30.774288  179606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:03:30.785310  179606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:03:30.796182  179606 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:03:30.796213  179606 kubeadm.go:157] found existing configuration files:
	
	I1028 12:03:30.796272  179606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:03:30.808640  179606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:03:30.808699  179606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:03:30.827650  179606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:03:30.839509  179606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:03:30.839584  179606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:03:30.850875  179606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:03:30.862867  179606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:03:30.862919  179606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:03:30.875691  179606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:03:30.890446  179606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:03:30.890522  179606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:03:30.901762  179606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:03:31.044394  179606 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:03:31.044735  179606 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:03:31.256040  179606 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:03:31.256174  179606 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:03:31.256291  179606 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:03:31.464325  179606 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:03:31.608012  179606 out.go:235]   - Generating certificates and keys ...
	I1028 12:03:31.608141  179606 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:03:31.608226  179606 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:03:31.608372  179606 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:03:31.676859  179606 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:03:31.795585  179606 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 12:03:31.915249  179606 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 12:03:32.125190  179606 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 12:03:32.125394  179606 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-337849 localhost] and IPs [192.168.50.142 127.0.0.1 ::1]
	I1028 12:03:32.718618  179606 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 12:03:32.718801  179606 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-337849 localhost] and IPs [192.168.50.142 127.0.0.1 ::1]
	I1028 12:03:32.851172  179606 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:03:33.203224  179606 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:03:33.307331  179606 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 12:03:33.307422  179606 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:03:33.594066  179606 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:03:33.917369  179606 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:03:34.126045  179606 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:03:34.365141  179606 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:03:34.391219  179606 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:03:34.392402  179606 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:03:34.392515  179606 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:03:34.527851  179606 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:03:34.529802  179606 out.go:235]   - Booting up control plane ...
	I1028 12:03:34.529908  179606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:03:34.542631  179606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:03:34.545106  179606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:03:34.546620  179606 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:03:34.553006  179606 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:04:14.550538  179606 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:04:14.551314  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:04:14.551569  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:04:19.552250  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:04:19.552538  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:04:29.553048  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:04:29.553301  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:04:49.554934  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:04:49.555222  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:05:29.554582  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:05:29.554872  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:05:29.554906  179606 kubeadm.go:310] 
	I1028 12:05:29.554951  179606 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:05:29.554996  179606 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:05:29.555003  179606 kubeadm.go:310] 
	I1028 12:05:29.555094  179606 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:05:29.555157  179606 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:05:29.555301  179606 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:05:29.555310  179606 kubeadm.go:310] 
	I1028 12:05:29.555435  179606 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:05:29.555526  179606 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:05:29.555575  179606 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:05:29.555588  179606 kubeadm.go:310] 
	I1028 12:05:29.555736  179606 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:05:29.555841  179606 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:05:29.555856  179606 kubeadm.go:310] 
	I1028 12:05:29.556015  179606 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:05:29.556169  179606 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:05:29.556278  179606 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:05:29.556392  179606 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:05:29.556412  179606 kubeadm.go:310] 
	I1028 12:05:29.556600  179606 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:05:29.556703  179606 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:05:29.556813  179606 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1028 12:05:29.556943  179606 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-337849 localhost] and IPs [192.168.50.142 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-337849 localhost] and IPs [192.168.50.142 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-337849 localhost] and IPs [192.168.50.142 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-337849 localhost] and IPs [192.168.50.142 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 12:05:29.556978  179606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:05:31.111707  179606 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.55469455s)
	I1028 12:05:31.111808  179606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:05:31.126706  179606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:05:31.136913  179606 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:05:31.136933  179606 kubeadm.go:157] found existing configuration files:
	
	I1028 12:05:31.136987  179606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:05:31.147301  179606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:05:31.147361  179606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:05:31.157302  179606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:05:31.166639  179606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:05:31.166716  179606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:05:31.177106  179606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:05:31.186754  179606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:05:31.186814  179606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:05:31.196746  179606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:05:31.206947  179606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:05:31.207003  179606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:05:31.217246  179606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:05:31.302240  179606 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:05:31.302317  179606 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:05:31.464438  179606 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:05:31.464605  179606 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:05:31.464763  179606 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:05:31.643615  179606 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:05:31.646611  179606 out.go:235]   - Generating certificates and keys ...
	I1028 12:05:31.646716  179606 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:05:31.646822  179606 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:05:31.646905  179606 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:05:31.646959  179606 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:05:31.647023  179606 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:05:31.647076  179606 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:05:31.647147  179606 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:05:31.647246  179606 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:05:31.647369  179606 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:05:31.647476  179606 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:05:31.647533  179606 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:05:31.647613  179606 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:05:31.883755  179606 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:05:31.960751  179606 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:05:32.198722  179606 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:05:32.448685  179606 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:05:32.464352  179606 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:05:32.465585  179606 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:05:32.465657  179606 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:05:32.608023  179606 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:05:32.609831  179606 out.go:235]   - Booting up control plane ...
	I1028 12:05:32.609945  179606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:05:32.610029  179606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:05:32.610090  179606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:05:32.610680  179606 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:05:32.614116  179606 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:06:12.616560  179606 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:06:12.616918  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:06:12.617171  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:06:17.617649  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:06:17.617970  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:06:27.618362  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:06:27.618612  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:06:47.619866  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:06:47.620117  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:07:27.619724  179606 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:07:27.619926  179606 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:07:27.619936  179606 kubeadm.go:310] 
	I1028 12:07:27.619998  179606 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:07:27.620077  179606 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:07:27.620092  179606 kubeadm.go:310] 
	I1028 12:07:27.620136  179606 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:07:27.620186  179606 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:07:27.620321  179606 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:07:27.620330  179606 kubeadm.go:310] 
	I1028 12:07:27.620476  179606 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:07:27.620533  179606 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:07:27.620575  179606 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:07:27.620595  179606 kubeadm.go:310] 
	I1028 12:07:27.620728  179606 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:07:27.620825  179606 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:07:27.620836  179606 kubeadm.go:310] 
	I1028 12:07:27.620929  179606 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:07:27.621047  179606 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:07:27.621185  179606 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:07:27.621297  179606 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:07:27.621309  179606 kubeadm.go:310] 
	I1028 12:07:27.621978  179606 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:07:27.622079  179606 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:07:27.622173  179606 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:07:27.622412  179606 kubeadm.go:394] duration metric: took 3m56.903686931s to StartCluster
	I1028 12:07:27.622464  179606 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:07:27.622524  179606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:07:27.673957  179606 cri.go:89] found id: ""
	I1028 12:07:27.673991  179606 logs.go:282] 0 containers: []
	W1028 12:07:27.673998  179606 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:07:27.674006  179606 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:07:27.674070  179606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:07:27.712875  179606 cri.go:89] found id: ""
	I1028 12:07:27.712903  179606 logs.go:282] 0 containers: []
	W1028 12:07:27.712914  179606 logs.go:284] No container was found matching "etcd"
	I1028 12:07:27.712922  179606 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:07:27.712983  179606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:07:27.750212  179606 cri.go:89] found id: ""
	I1028 12:07:27.750245  179606 logs.go:282] 0 containers: []
	W1028 12:07:27.750256  179606 logs.go:284] No container was found matching "coredns"
	I1028 12:07:27.750264  179606 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:07:27.750331  179606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:07:27.788124  179606 cri.go:89] found id: ""
	I1028 12:07:27.788155  179606 logs.go:282] 0 containers: []
	W1028 12:07:27.788166  179606 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:07:27.788175  179606 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:07:27.788225  179606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:07:27.846172  179606 cri.go:89] found id: ""
	I1028 12:07:27.846208  179606 logs.go:282] 0 containers: []
	W1028 12:07:27.846220  179606 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:07:27.846228  179606 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:07:27.846306  179606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:07:27.885112  179606 cri.go:89] found id: ""
	I1028 12:07:27.885146  179606 logs.go:282] 0 containers: []
	W1028 12:07:27.885159  179606 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:07:27.885168  179606 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:07:27.885229  179606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:07:27.923442  179606 cri.go:89] found id: ""
	I1028 12:07:27.923470  179606 logs.go:282] 0 containers: []
	W1028 12:07:27.923481  179606 logs.go:284] No container was found matching "kindnet"
	I1028 12:07:27.923494  179606 logs.go:123] Gathering logs for kubelet ...
	I1028 12:07:27.923508  179606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:07:27.974696  179606 logs.go:123] Gathering logs for dmesg ...
	I1028 12:07:27.974759  179606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:07:27.989950  179606 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:07:27.989980  179606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:07:28.125070  179606 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:07:28.125096  179606 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:07:28.125111  179606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:07:28.236106  179606 logs.go:123] Gathering logs for container status ...
	I1028 12:07:28.236147  179606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1028 12:07:28.280065  179606 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:07:28.280134  179606 out.go:270] * 
	* 
	W1028 12:07:28.280197  179606 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:07:28.280215  179606 out.go:270] * 
	W1028 12:07:28.281060  179606 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:07:28.284186  179606 out.go:201] 
	W1028 12:07:28.286178  179606 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:07:28.286220  179606 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:07:28.286243  179606 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:07:28.287861  179606 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-337849 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-337849
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-337849: (6.324953019s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-337849 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-337849 status --format={{.Host}}: exit status 7 (66.876663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-337849 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1028 12:07:38.998293  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-337849 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.895318158s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-337849 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-337849 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-337849 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (88.310698ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-337849] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-337849
	    minikube start -p kubernetes-upgrade-337849 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3378492 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-337849 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-337849 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-337849 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.75639465s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-28 12:08:44.540522297 +0000 UTC m=+4447.208746686
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-337849 -n kubernetes-upgrade-337849
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-337849 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-337849 logs -n 25: (1.662597224s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-606176                                | NoKubernetes-606176       | jenkins | v1.34.0 | 28 Oct 24 12:03 UTC | 28 Oct 24 12:04 UTC |
	|         | --no-kubernetes --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-628680                             | running-upgrade-628680    | jenkins | v1.34.0 | 28 Oct 24 12:03 UTC | 28 Oct 24 12:03 UTC |
	| start   | -p force-systemd-flag-320662                          | force-systemd-flag-320662 | jenkins | v1.34.0 | 28 Oct 24 12:03 UTC | 28 Oct 24 12:05 UTC |
	|         | --memory=2048 --force-systemd                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-606176 sudo                           | NoKubernetes-606176       | jenkins | v1.34.0 | 28 Oct 24 12:04 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-606176                                | NoKubernetes-606176       | jenkins | v1.34.0 | 28 Oct 24 12:04 UTC | 28 Oct 24 12:04 UTC |
	| start   | -p NoKubernetes-606176                                | NoKubernetes-606176       | jenkins | v1.34.0 | 28 Oct 24 12:04 UTC | 28 Oct 24 12:05 UTC |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-320662 ssh cat                     | force-systemd-flag-320662 | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC | 28 Oct 24 12:05 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-320662                          | force-systemd-flag-320662 | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC | 28 Oct 24 12:05 UTC |
	| start   | -p cert-options-961573                                | cert-options-961573       | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC | 28 Oct 24 12:05 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-606176 sudo                           | NoKubernetes-606176       | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-606176                                | NoKubernetes-606176       | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC | 28 Oct 24 12:05 UTC |
	| start   | -p old-k8s-version-089993                             | old-k8s-version-089993    | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| ssh     | cert-options-961573 ssh                               | cert-options-961573       | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC | 28 Oct 24 12:05 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-961573 -- sudo                        | cert-options-961573       | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC | 28 Oct 24 12:05 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-961573                                | cert-options-961573       | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC | 28 Oct 24 12:05 UTC |
	| start   | -p no-preload-871884                                  | no-preload-871884         | jenkins | v1.34.0 | 28 Oct 24 12:05 UTC | 28 Oct 24 12:07 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                           |         |         |                     |                     |
	| start   | -p cert-expiration-601400                             | cert-expiration-601400    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-337849                          | kubernetes-upgrade-337849 | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p kubernetes-upgrade-337849                          | kubernetes-upgrade-337849 | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-601400                             | cert-expiration-601400    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p embed-certs-709250                                 | embed-certs-709250        | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-871884            | no-preload-871884         | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-871884                                  | no-preload-871884         | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                          | kubernetes-upgrade-337849 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                          | kubernetes-upgrade-337849 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:08:13
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:08:13.827209  184037 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:08:13.827346  184037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:08:13.827359  184037 out.go:358] Setting ErrFile to fd 2...
	I1028 12:08:13.827366  184037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:08:13.827581  184037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:08:13.828156  184037 out.go:352] Setting JSON to false
	I1028 12:08:13.829110  184037 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6637,"bootTime":1730110657,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:08:13.829212  184037 start.go:139] virtualization: kvm guest
	I1028 12:08:13.831102  184037 out.go:177] * [kubernetes-upgrade-337849] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:08:13.832399  184037 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:08:13.832439  184037 notify.go:220] Checking for updates...
	I1028 12:08:13.835165  184037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:08:13.836453  184037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:08:13.837736  184037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:08:13.839085  184037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:08:13.840484  184037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:08:13.842295  184037 config.go:182] Loaded profile config "kubernetes-upgrade-337849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:08:13.842694  184037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:08:13.842738  184037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:08:13.858045  184037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I1028 12:08:13.858619  184037 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:08:13.859186  184037 main.go:141] libmachine: Using API Version  1
	I1028 12:08:13.859206  184037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:08:13.859659  184037 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:08:13.859861  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:08:13.860134  184037 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:08:13.860464  184037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:08:13.860532  184037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:08:13.875746  184037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36937
	I1028 12:08:13.876232  184037 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:08:13.876734  184037 main.go:141] libmachine: Using API Version  1
	I1028 12:08:13.876757  184037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:08:13.877047  184037 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:08:13.877203  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:08:13.917003  184037 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:08:13.918349  184037 start.go:297] selected driver: kvm2
	I1028 12:08:13.918369  184037 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-337849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:08:13.918517  184037 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:08:13.919546  184037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:08:13.919643  184037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:08:13.935884  184037 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:08:13.936317  184037 cni.go:84] Creating CNI manager for ""
	I1028 12:08:13.936372  184037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:08:13.936412  184037 start.go:340] cluster config:
	{Name:kubernetes-upgrade-337849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:08:13.936526  184037 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:08:13.939371  184037 out.go:177] * Starting "kubernetes-upgrade-337849" primary control-plane node in "kubernetes-upgrade-337849" cluster
	I1028 12:08:13.966203  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:08:13.966447  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:08:13.966461  182116 kubeadm.go:310] 
	I1028 12:08:13.966495  182116 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:08:13.966559  182116 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:08:13.966587  182116 kubeadm.go:310] 
	I1028 12:08:13.966619  182116 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:08:13.966668  182116 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:08:13.966777  182116 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:08:13.966789  182116 kubeadm.go:310] 
	I1028 12:08:13.966896  182116 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:08:13.966926  182116 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:08:13.966955  182116 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:08:13.966961  182116 kubeadm.go:310] 
	I1028 12:08:13.967061  182116 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:08:13.967192  182116 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:08:13.967218  182116 kubeadm.go:310] 
	I1028 12:08:13.967372  182116 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:08:13.967457  182116 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:08:13.967580  182116 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:08:13.967693  182116 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:08:13.967706  182116 kubeadm.go:310] 
	I1028 12:08:13.968111  182116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:08:13.968202  182116 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:08:13.968280  182116 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1028 12:08:13.968415  182116 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-089993] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-089993] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
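As a quick aside, the diagnostics kubeadm suggests above can be replayed by hand inside the failing guest; a minimal sketch, assuming shell access to the old-k8s-version-089993 VM (reaching it via "minikube ssh -p old-k8s-version-089993" is an assumption of this sketch, not part of the captured run):

	# Is the kubelet service running, and what do its recent logs say?
	systemctl status kubelet
	journalctl -xeu kubelet
	# Reproduce the health probe the kubelet-check loop above keeps polling
	curl -sSL http://localhost:10248/healthz
	# List control-plane containers through the cri-o socket, then pull logs of a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID is a placeholder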
	
	I1028 12:08:13.968460  182116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:08:15.223161  182116 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.254671159s)
	I1028 12:08:15.223268  182116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:08:15.238671  182116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:08:15.249209  182116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:08:15.249230  182116 kubeadm.go:157] found existing configuration files:
	
	I1028 12:08:15.249281  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:08:15.262031  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:08:15.262109  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:08:15.275345  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:08:15.287372  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:08:15.287432  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:08:15.297469  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:08:15.307195  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:08:15.307253  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:08:15.319228  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:08:15.330066  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:08:15.330123  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:08:15.339996  182116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:08:15.419757  182116 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:08:15.419879  182116 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:08:15.574408  182116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:08:15.574600  182116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:08:15.574740  182116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:08:15.786032  182116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:08:15.787564  182116 out.go:235]   - Generating certificates and keys ...
	I1028 12:08:15.787698  182116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:08:15.787792  182116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:08:15.787896  182116 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:08:15.787976  182116 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:08:15.788069  182116 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:08:15.788142  182116 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:08:15.788229  182116 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:08:15.788347  182116 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:08:15.788495  182116 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:08:15.788612  182116 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:08:15.788666  182116 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:08:15.788742  182116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:08:15.928496  182116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:08:16.123628  182116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:08:16.703518  182116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:08:17.024252  182116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:08:17.046272  182116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:08:17.047494  182116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:08:17.047564  182116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:08:17.201776  182116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:08:14.945503  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:14.946042  183636 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:08:14.946066  183636 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:08:14.945998  183676 retry.go:31] will retry after 3.559173578s: waiting for machine to come up
	I1028 12:08:13.940731  184037 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:08:13.940786  184037 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:08:13.940796  184037 cache.go:56] Caching tarball of preloaded images
	I1028 12:08:13.940898  184037 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:08:13.940914  184037 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:08:13.941031  184037 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/config.json ...
	I1028 12:08:13.941271  184037 start.go:360] acquireMachinesLock for kubernetes-upgrade-337849: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:08:20.219115  184037 start.go:364] duration metric: took 6.277802321s to acquireMachinesLock for "kubernetes-upgrade-337849"
	I1028 12:08:20.219172  184037 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:08:20.219180  184037 fix.go:54] fixHost starting: 
	I1028 12:08:20.219607  184037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:08:20.219659  184037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:08:20.237196  184037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I1028 12:08:20.237729  184037 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:08:20.238206  184037 main.go:141] libmachine: Using API Version  1
	I1028 12:08:20.238229  184037 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:08:20.238536  184037 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:08:20.238722  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:08:20.238862  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetState
	I1028 12:08:20.240541  184037 fix.go:112] recreateIfNeeded on kubernetes-upgrade-337849: state=Running err=<nil>
	W1028 12:08:20.240562  184037 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:08:20.242675  184037 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-337849" VM ...
	I1028 12:08:17.203693  182116 out.go:235]   - Booting up control plane ...
	I1028 12:08:17.203858  182116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:08:17.208480  182116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:08:17.209391  182116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:08:17.211757  182116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:08:17.214470  182116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:08:18.506402  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.506945  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has current primary IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.506977  183636 main.go:141] libmachine: (embed-certs-709250) Found IP for machine: 192.168.39.211
	I1028 12:08:18.506985  183636 main.go:141] libmachine: (embed-certs-709250) Reserving static IP address...
	I1028 12:08:18.507383  183636 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"} in network mk-embed-certs-709250
	I1028 12:08:18.588227  183636 main.go:141] libmachine: (embed-certs-709250) DBG | Getting to WaitForSSH function...
	I1028 12:08:18.588254  183636 main.go:141] libmachine: (embed-certs-709250) Reserved static IP address: 192.168.39.211
	I1028 12:08:18.588267  183636 main.go:141] libmachine: (embed-certs-709250) Waiting for SSH to be available...
	I1028 12:08:18.591214  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.591738  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:18.591773  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.591894  183636 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH client type: external
	I1028 12:08:18.591919  183636 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa (-rw-------)
	I1028 12:08:18.591950  183636 main.go:141] libmachine: (embed-certs-709250) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:08:18.591964  183636 main.go:141] libmachine: (embed-certs-709250) DBG | About to run SSH command:
	I1028 12:08:18.591977  183636 main.go:141] libmachine: (embed-certs-709250) DBG | exit 0
	I1028 12:08:18.718257  183636 main.go:141] libmachine: (embed-certs-709250) DBG | SSH cmd err, output: <nil>: 
	I1028 12:08:18.718568  183636 main.go:141] libmachine: (embed-certs-709250) KVM machine creation complete!
	I1028 12:08:18.718915  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetConfigRaw
	I1028 12:08:18.719535  183636 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:08:18.719739  183636 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:08:18.719909  183636 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 12:08:18.719928  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:08:18.721338  183636 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 12:08:18.721353  183636 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 12:08:18.721360  183636 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 12:08:18.721367  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:18.723672  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.724032  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:18.724085  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.724235  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:18.724421  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:18.724586  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:18.724748  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:18.724902  183636 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:18.725100  183636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:08:18.725112  183636 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 12:08:18.829248  183636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:08:18.829274  183636 main.go:141] libmachine: Detecting the provisioner...
	I1028 12:08:18.829282  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:18.832530  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.833042  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:18.833082  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.833333  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:18.833588  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:18.833812  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:18.833966  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:18.834147  183636 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:18.834348  183636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:08:18.834359  183636 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 12:08:18.943031  183636 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 12:08:18.943096  183636 main.go:141] libmachine: found compatible host: buildroot
	I1028 12:08:18.943104  183636 main.go:141] libmachine: Provisioning with buildroot...
	I1028 12:08:18.943115  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:08:18.943414  183636 buildroot.go:166] provisioning hostname "embed-certs-709250"
	I1028 12:08:18.943445  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:08:18.943644  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:18.946812  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.947224  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:18.947245  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:18.947435  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:18.947621  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:18.947755  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:18.947867  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:18.948072  183636 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:18.948250  183636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:08:18.948261  183636 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709250 && echo "embed-certs-709250" | sudo tee /etc/hostname
	I1028 12:08:19.070295  183636 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709250
	
	I1028 12:08:19.070321  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:19.072989  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.073286  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:19.073317  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.073588  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:19.073803  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:19.073972  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:19.074154  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:19.074323  183636 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:19.074561  183636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:08:19.074588  183636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709250/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:08:19.187444  183636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:08:19.187478  183636 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:08:19.187504  183636 buildroot.go:174] setting up certificates
	I1028 12:08:19.187517  183636 provision.go:84] configureAuth start
	I1028 12:08:19.187531  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:08:19.187792  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:08:19.190280  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.190607  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:19.190636  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.190753  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:19.192943  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.193270  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:19.193315  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.193473  183636 provision.go:143] copyHostCerts
	I1028 12:08:19.193551  183636 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:08:19.193571  183636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:08:19.193652  183636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:08:19.193785  183636 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:08:19.193798  183636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:08:19.193828  183636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:08:19.193918  183636 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:08:19.193936  183636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:08:19.193965  183636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:08:19.194046  183636 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709250 san=[127.0.0.1 192.168.39.211 embed-certs-709250 localhost minikube]
	I1028 12:08:19.558610  183636 provision.go:177] copyRemoteCerts
	I1028 12:08:19.558680  183636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:08:19.558713  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:19.561707  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.562015  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:19.562048  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.562215  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:19.562389  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:19.562532  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:19.562652  183636 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:08:19.646442  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:08:19.673776  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 12:08:19.698468  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:08:19.723456  183636 provision.go:87] duration metric: took 535.923654ms to configureAuth
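One way to spot-check the server certificate that configureAuth just copied to /etc/docker/server.pem (an editorial sketch, not something this run executes) is to print its subject alternative names and compare them with the SANs requested above (127.0.0.1, 192.168.39.211, embed-certs-709250, localhost, minikube):

	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'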
	I1028 12:08:19.723485  183636 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:08:19.723690  183636 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:08:19.723784  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:19.726498  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.726968  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:19.727001  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.727213  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:19.727418  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:19.727656  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:19.727781  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:19.727938  183636 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:19.728115  183636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:08:19.728129  183636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:08:19.964768  183636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:08:19.964798  183636 main.go:141] libmachine: Checking connection to Docker...
	I1028 12:08:19.964807  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetURL
	I1028 12:08:19.966239  183636 main.go:141] libmachine: (embed-certs-709250) DBG | Using libvirt version 6000000
	I1028 12:08:19.968720  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.969038  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:19.969074  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.969295  183636 main.go:141] libmachine: Docker is up and running!
	I1028 12:08:19.969309  183636 main.go:141] libmachine: Reticulating splines...
	I1028 12:08:19.969316  183636 client.go:171] duration metric: took 24.692068892s to LocalClient.Create
	I1028 12:08:19.969338  183636 start.go:167] duration metric: took 24.692136537s to libmachine.API.Create "embed-certs-709250"
	I1028 12:08:19.969347  183636 start.go:293] postStartSetup for "embed-certs-709250" (driver="kvm2")
	I1028 12:08:19.969362  183636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:08:19.969388  183636 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:08:19.969643  183636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:08:19.969674  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:19.971735  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.972052  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:19.972080  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:19.972197  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:19.972360  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:19.972485  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:19.972618  183636 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:08:20.057046  183636 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:08:20.061845  183636 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:08:20.061901  183636 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:08:20.062043  183636 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:08:20.062188  183636 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:08:20.062318  183636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:08:20.072649  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:08:20.102099  183636 start.go:296] duration metric: took 132.732829ms for postStartSetup
	I1028 12:08:20.102154  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetConfigRaw
	I1028 12:08:20.102793  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:08:20.105491  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.105847  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:20.105886  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.106183  183636 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/config.json ...
	I1028 12:08:20.106389  183636 start.go:128] duration metric: took 24.85136952s to createHost
	I1028 12:08:20.106417  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:20.108479  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.108787  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:20.108815  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.109054  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:20.109224  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:20.109358  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:20.109459  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:20.109601  183636 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:20.109797  183636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:08:20.109824  183636 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:08:20.218960  183636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117300.188509758
	
	I1028 12:08:20.218984  183636 fix.go:216] guest clock: 1730117300.188509758
	I1028 12:08:20.218994  183636 fix.go:229] Guest: 2024-10-28 12:08:20.188509758 +0000 UTC Remote: 2024-10-28 12:08:20.106401371 +0000 UTC m=+27.080490429 (delta=82.108387ms)
	I1028 12:08:20.219019  183636 fix.go:200] guest clock delta is within tolerance: 82.108387ms
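For reference, the delta above is simply the difference of the two logged timestamps: 1730117300.188509758 s - 1730117300.106401371 s = 0.082108387 s ≈ 82.108387 ms, which is the value reported and falls within the drift tolerance minikube checks here.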
	I1028 12:08:20.219027  183636 start.go:83] releasing machines lock for "embed-certs-709250", held for 24.964224109s
	I1028 12:08:20.219072  183636 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:08:20.219384  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:08:20.222188  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.222573  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:20.222602  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.222841  183636 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:08:20.223364  183636 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:08:20.223568  183636 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:08:20.223654  183636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:08:20.223698  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:20.223794  183636 ssh_runner.go:195] Run: cat /version.json
	I1028 12:08:20.223818  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:08:20.226666  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.226697  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.227039  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:20.227068  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.227096  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:20.227113  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:20.227264  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:20.227413  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:08:20.227486  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:20.227578  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:08:20.227728  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:20.227767  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:08:20.227872  183636 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:08:20.227874  183636 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:08:20.331851  183636 ssh_runner.go:195] Run: systemctl --version
	I1028 12:08:20.338390  183636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:08:20.507789  183636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:08:20.515430  183636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:08:20.515507  183636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:08:20.532553  183636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:08:20.532585  183636 start.go:495] detecting cgroup driver to use...
	I1028 12:08:20.532657  183636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:08:20.552022  183636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:08:20.568721  183636 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:08:20.568792  183636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:08:20.584788  183636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:08:20.599743  183636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:08:20.723413  183636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:08:20.869124  183636 docker.go:233] disabling docker service ...
	I1028 12:08:20.869203  183636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:08:20.885614  183636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:08:20.901042  183636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:08:21.054755  183636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:08:21.181608  183636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:08:21.205731  183636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:08:21.225189  183636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:08:21.225256  183636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:21.236160  183636 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:08:21.236235  183636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:21.247179  183636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:21.257925  183636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:21.268375  183636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:08:21.279022  183636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:21.289401  183636 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:21.307393  183636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
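Net effect of the series of sed edits above (a reconstruction for readability, not output captured from the run): the drop-in /etc/crio/crio.conf.d/02-crio.conf ends up pinning the pause image, switching the cgroup manager, setting the conmon cgroup, and re-adding the unprivileged-port sysctl, which can be confirmed with:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above:
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",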
	I1028 12:08:21.318441  183636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:08:21.328539  183636 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:08:21.328625  183636 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:08:21.343120  183636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:08:21.353756  183636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:08:21.479605  183636 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:08:21.573894  183636 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:08:21.573974  183636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:08:21.578967  183636 start.go:563] Will wait 60s for crictl version
	I1028 12:08:21.579030  183636 ssh_runner.go:195] Run: which crictl
	I1028 12:08:21.582935  183636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:08:21.623699  183636 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:08:21.623783  183636 ssh_runner.go:195] Run: crio --version
	I1028 12:08:21.656131  183636 ssh_runner.go:195] Run: crio --version
	I1028 12:08:21.687205  183636 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:08:21.688776  183636 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:08:21.691511  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:21.691842  183636 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:08:21.691867  183636 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:08:21.692074  183636 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:08:21.696429  183636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:08:21.709681  183636 kubeadm.go:883] updating cluster {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:08:21.709784  183636 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:08:21.709824  183636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:08:21.747794  183636 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:08:21.747858  183636 ssh_runner.go:195] Run: which lz4
	I1028 12:08:21.752033  183636 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:08:21.756346  183636 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:08:21.756375  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:08:20.244386  184037 machine.go:93] provisionDockerMachine start ...
	I1028 12:08:20.244417  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:08:20.244676  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:20.247467  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.247945  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:20.247994  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.248263  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:08:20.248424  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:20.248582  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:20.248725  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:08:20.248865  184037 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:20.249161  184037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:08:20.249177  184037 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:08:20.363086  184037 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-337849
	
	I1028 12:08:20.363120  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetMachineName
	I1028 12:08:20.363374  184037 buildroot.go:166] provisioning hostname "kubernetes-upgrade-337849"
	I1028 12:08:20.363394  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetMachineName
	I1028 12:08:20.363577  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:20.366926  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.367339  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:20.367369  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.367549  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:08:20.367738  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:20.367915  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:20.368092  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:08:20.368255  184037 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:20.368487  184037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:08:20.368506  184037 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-337849 && echo "kubernetes-upgrade-337849" | sudo tee /etc/hostname
	I1028 12:08:20.503376  184037 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-337849
	
	I1028 12:08:20.503410  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:20.506817  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.507175  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:20.507206  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.507394  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:08:20.507599  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:20.507795  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:20.507968  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:08:20.508203  184037 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:20.508401  184037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:08:20.508416  184037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-337849' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-337849/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-337849' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:08:20.624086  184037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
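
Hostname provisioning follows the Debian-style convention: set the hostname, then make sure /etc/hosts maps 127.0.1.1 to it so the name resolves even without DNS. The result can be spot-checked on the guest (hostname taken from the log):

    # Verify the provisioned hostname and its loopback mapping.
    hostname                                                  # expect: kubernetes-upgrade-337849
    grep -x '127.0.1.1 kubernetes-upgrade-337849' /etc/hosts
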
	I1028 12:08:20.624118  184037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:08:20.624165  184037 buildroot.go:174] setting up certificates
	I1028 12:08:20.624179  184037 provision.go:84] configureAuth start
	I1028 12:08:20.624194  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetMachineName
	I1028 12:08:20.624468  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetIP
	I1028 12:08:20.627262  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.627564  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:20.627592  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.627757  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:20.630446  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.630824  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:20.630849  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.630973  184037 provision.go:143] copyHostCerts
	I1028 12:08:20.631043  184037 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:08:20.631060  184037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:08:20.631132  184037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:08:20.631262  184037 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:08:20.631275  184037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:08:20.631314  184037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:08:20.631415  184037 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:08:20.631427  184037 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:08:20.631464  184037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:08:20.631541  184037 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-337849 san=[127.0.0.1 192.168.50.142 kubernetes-upgrade-337849 localhost minikube]
	I1028 12:08:20.764875  184037 provision.go:177] copyRemoteCerts
	I1028 12:08:20.764929  184037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:08:20.764953  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:20.767878  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.768193  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:20.768223  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.768385  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:08:20.768573  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:20.768715  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:08:20.768870  184037 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa Username:docker}
	I1028 12:08:20.856257  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:08:20.886746  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1028 12:08:20.919959  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:08:20.952817  184037 provision.go:87] duration metric: took 328.619419ms to configureAuth
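
configureAuth regenerates a server certificate for the machine's SANs and copies the CA plus the server key pair to /etc/docker on the guest; that directory name appears to be inherited from the docker-machine provisioning layout and is used even though the runtime here is CRI-O. A quick consistency check against the files the scp lines just placed:

    # Check that the uploaded server certificate chains to the uploaded CA.
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
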
	I1028 12:08:20.952846  184037 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:08:20.953029  184037 config.go:182] Loaded profile config "kubernetes-upgrade-337849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:08:20.953108  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:20.956252  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.956782  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:20.956828  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:20.957052  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:08:20.957242  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:20.957467  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:20.957699  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:08:20.957884  184037 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:20.958116  184037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:08:20.958141  184037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:08:23.202442  183636 crio.go:462] duration metric: took 1.450443932s to copy over tarball
	I1028 12:08:23.202515  183636 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:08:25.309256  183636 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.106712699s)
	I1028 12:08:25.309282  183636 crio.go:469] duration metric: took 2.106810069s to extract the tarball
	I1028 12:08:25.309289  183636 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:08:25.346807  183636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:08:25.392998  183636 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:08:25.393023  183636 cache_images.go:84] Images are preloaded, skipping loading
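
The extraction step streams the archive through lz4 (`-I lz4`) and keeps file capabilities (`--xattrs --xattrs-include security.capability`), after which crictl reports every required image as present. The same tarball can be inspected on the host without unpacking it, for example:

    # List the first few entries of the preload tarball without extracting it.
    lz4 -dc preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 | tar -t | head
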
	I1028 12:08:25.393031  183636 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1028 12:08:25.393121  183636 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:08:25.393184  183636 ssh_runner.go:195] Run: crio config
	I1028 12:08:25.444459  183636 cni.go:84] Creating CNI manager for ""
	I1028 12:08:25.444483  183636 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:08:25.444494  183636 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:08:25.444516  183636 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709250 NodeName:embed-certs-709250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:08:25.444638  183636 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709250"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:08:25.444704  183636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:08:25.454837  183636 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:08:25.454912  183636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:08:25.464644  183636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1028 12:08:25.482057  183636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:08:25.499556  183636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
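
At this point the kubelet drop-in, the kubelet unit, and the rendered kubeadm config have all been written to the guest. The config stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and can be sanity-checked in place before the init that follows; running `kubeadm config validate` here is a suggestion, not something the test itself does:

    # Validate the just-uploaded config with the bundled kubeadm binary.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
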
	I1028 12:08:25.516376  183636 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1028 12:08:25.520386  183636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:08:25.533117  183636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:08:25.655476  183636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:08:25.673444  183636 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250 for IP: 192.168.39.211
	I1028 12:08:25.673467  183636 certs.go:194] generating shared ca certs ...
	I1028 12:08:25.673481  183636 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:08:25.673657  183636 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:08:25.673706  183636 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:08:25.673721  183636 certs.go:256] generating profile certs ...
	I1028 12:08:25.673787  183636 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.key
	I1028 12:08:25.673808  183636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.crt with IP's: []
	I1028 12:08:25.866415  183636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.crt ...
	I1028 12:08:25.866442  183636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.crt: {Name:mkc31c1661bc2334daa96d51e00b25ed423fc62c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:08:25.866608  183636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.key ...
	I1028 12:08:25.866619  183636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.key: {Name:mk51722a8225e9e2505d17f22a4270154f3feebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:08:25.866696  183636 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce
	I1028 12:08:25.866712  183636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt.20eef9ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.211]
	I1028 12:08:26.071369  183636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt.20eef9ce ...
	I1028 12:08:26.071404  183636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt.20eef9ce: {Name:mkf6e7ab37a9a9c56f3a13bd5dd439ca78b8ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:08:26.071571  183636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce ...
	I1028 12:08:26.071583  183636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce: {Name:mk1d97dd47c706f28a89c0ba4a0d7507b5369828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:08:26.071658  183636 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt.20eef9ce -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt
	I1028 12:08:26.071747  183636 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key
	I1028 12:08:26.071804  183636 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key
	I1028 12:08:26.071820  183636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt with IP's: []
	I1028 12:08:26.167758  183636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt ...
	I1028 12:08:26.167787  183636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt: {Name:mkf0299337c889d51fa411d2e6b24874d4714b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:08:26.167944  183636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key ...
	I1028 12:08:26.167960  183636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key: {Name:mk723823170f86f22025c73fb9d8d2ec2fa35a91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
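
The profile certificates generated here include an apiserver cert whose IP SANs cover 10.96.0.1 (the first address of the 10.96.0.0/12 service range, i.e. the in-cluster `kubernetes` Service IP), 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.211. The SANs can be read back from the written file, for example (path shortened):

    # Show the SANs baked into the generated apiserver certificate.
    openssl x509 -noout -text -in .minikube/profiles/embed-certs-709250/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
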
	I1028 12:08:26.168159  183636 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:08:26.168203  183636 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:08:26.168218  183636 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:08:26.168255  183636 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:08:26.168293  183636 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:08:26.168341  183636 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:08:26.168397  183636 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:08:26.169026  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:08:26.196002  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:08:26.228238  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:08:26.255008  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:08:26.281169  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 12:08:26.306712  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:08:26.336261  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:08:26.367189  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:08:26.396056  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:08:26.421803  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:08:26.450333  183636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:08:26.474967  183636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:08:26.494895  183636 ssh_runner.go:195] Run: openssl version
	I1028 12:08:26.502142  183636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:08:26.513937  183636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:08:26.519187  183636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:08:26.519257  183636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:08:26.525634  183636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:08:26.537354  183636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:08:26.549114  183636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:08:26.554085  183636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:08:26.554151  183636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:08:26.562536  183636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:08:26.574269  183636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:08:26.585502  183636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:08:26.590306  183636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:08:26.590378  183636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:08:26.596320  183636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
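
The 3ec20f2e.0, b5213941.0 and 51391683.0 names are OpenSSL subject-hash links: each CA placed under /usr/share/ca-certificates also gets a <hash>.0 symlink in /etc/ssl/certs so verification can find it by hash. The log computes the hash and creates the link; done by hand it is:

    # Recreate the subject-hash symlink for the minikube CA (hash is b5213941 per the log).
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
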
	I1028 12:08:26.607762  183636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:08:26.612247  183636 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:08:26.612314  183636 kubeadm.go:392] StartCluster: {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:08:26.612416  183636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:08:26.612495  183636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:08:26.667526  183636 cri.go:89] found id: ""
	I1028 12:08:26.667610  183636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:08:26.681543  183636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:08:26.695199  183636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:08:26.708116  183636 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:08:26.708141  183636 kubeadm.go:157] found existing configuration files:
	
	I1028 12:08:26.708197  183636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:08:26.722189  183636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:08:26.722270  183636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:08:26.734065  183636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:08:26.744656  183636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:08:26.744732  183636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:08:26.755925  183636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:08:26.768501  183636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:08:26.768578  183636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:08:26.781813  183636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:08:26.794089  183636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:08:26.794156  183636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:08:26.807090  183636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:08:27.049953  183636 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
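
The preflight warning is expected in this flow: minikube started kubelet directly a few lines earlier but never enabled the unit, so kubeadm notes it and continues. On a node that should survive reboots the fix is a one-liner:

    # Enable kubelet so the preflight warning goes away on subsequent runs.
    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet.service    # expect: enabled
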
	I1028 12:08:27.052369  184037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:08:27.052397  184037 machine.go:96] duration metric: took 6.807989627s to provisionDockerMachine
	I1028 12:08:27.052412  184037 start.go:293] postStartSetup for "kubernetes-upgrade-337849" (driver="kvm2")
	I1028 12:08:27.052426  184037 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:08:27.052452  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:08:27.052921  184037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:08:27.052959  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:27.056036  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.056379  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:27.056415  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.056603  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:08:27.056843  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:27.057056  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:08:27.057233  184037 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa Username:docker}
	I1028 12:08:27.149057  184037 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:08:27.154531  184037 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:08:27.154565  184037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:08:27.154639  184037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:08:27.154752  184037 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:08:27.154924  184037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:08:27.167010  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:08:27.193519  184037 start.go:296] duration metric: took 141.090695ms for postStartSetup
	I1028 12:08:27.193585  184037 fix.go:56] duration metric: took 6.974406052s for fixHost
	I1028 12:08:27.193605  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:27.196739  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.197068  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:27.197096  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.197238  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:08:27.197446  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:27.197663  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:27.197832  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:08:27.197997  184037 main.go:141] libmachine: Using SSH client type: native
	I1028 12:08:27.198235  184037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1028 12:08:27.198251  184037 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:08:27.311115  184037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117307.302004187
	
	I1028 12:08:27.311147  184037 fix.go:216] guest clock: 1730117307.302004187
	I1028 12:08:27.311154  184037 fix.go:229] Guest: 2024-10-28 12:08:27.302004187 +0000 UTC Remote: 2024-10-28 12:08:27.193588963 +0000 UTC m=+13.405380151 (delta=108.415224ms)
	I1028 12:08:27.311183  184037 fix.go:200] guest clock delta is within tolerance: 108.415224ms
	I1028 12:08:27.311190  184037 start.go:83] releasing machines lock for "kubernetes-upgrade-337849", held for 7.092041165s
	I1028 12:08:27.311212  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:08:27.311462  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetIP
	I1028 12:08:27.314354  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.314768  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:27.314804  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.314934  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:08:27.315535  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:08:27.315733  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .DriverName
	I1028 12:08:27.315842  184037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:08:27.315906  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:27.315971  184037 ssh_runner.go:195] Run: cat /version.json
	I1028 12:08:27.316000  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHHostname
	I1028 12:08:27.318939  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.319154  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.319376  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:27.319406  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.319613  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:08:27.319726  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:27.319794  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:27.319841  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:27.319970  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHPort
	I1028 12:08:27.320044  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:08:27.320204  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHKeyPath
	I1028 12:08:27.320203  184037 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa Username:docker}
	I1028 12:08:27.320363  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetSSHUsername
	I1028 12:08:27.320505  184037 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/kubernetes-upgrade-337849/id_rsa Username:docker}
	I1028 12:08:27.426531  184037 ssh_runner.go:195] Run: systemctl --version
	I1028 12:08:27.433443  184037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:08:27.592622  184037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:08:27.600048  184037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:08:27.600131  184037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:08:27.610225  184037 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 12:08:27.610254  184037 start.go:495] detecting cgroup driver to use...
	I1028 12:08:27.610341  184037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:08:27.627369  184037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:08:27.646785  184037 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:08:27.646839  184037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:08:27.664257  184037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:08:27.678706  184037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:08:27.820514  184037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:08:27.964944  184037 docker.go:233] disabling docker service ...
	I1028 12:08:27.965148  184037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:08:27.982486  184037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:08:27.997361  184037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:08:28.145635  184037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:08:28.302605  184037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
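
Since this profile runs on CRI-O, both cri-docker and docker are stopped, disabled and masked so neither can reclaim the CRI socket on a later boot; the final `is-active` check confirms docker stayed down. Condensed, the sequence the log runs amounts to:

    # Keep Docker-side services from ever serving the CRI socket again.
    for u in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$u" || true
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
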
	I1028 12:08:28.317507  184037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:08:28.339689  184037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:08:28.339793  184037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:28.352393  184037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:08:28.352469  184037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:28.364521  184037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:28.376224  184037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:28.387914  184037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:08:28.400320  184037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:28.412279  184037 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:28.426332  184037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:08:28.440911  184037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:08:28.451671  184037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:08:28.462600  184037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:08:28.607621  184037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:08:33.676357  184037 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.06869044s)
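
The sed edits above leave a drop-in that pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and opens unprivileged low ports; the ~5s restart then makes CRI-O reload it. The expected shape of the file is roughly the following (section headers are assumed, since the log only rewrites individual keys):

    # cat /etc/crio/crio.conf.d/02-crio.conf   -- expected result, not captured in the log
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"
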
	I1028 12:08:33.676403  184037 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:08:33.676469  184037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:08:33.682729  184037 start.go:563] Will wait 60s for crictl version
	I1028 12:08:33.682800  184037 ssh_runner.go:195] Run: which crictl
	I1028 12:08:33.687938  184037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:08:33.730008  184037 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:08:33.730116  184037 ssh_runner.go:195] Run: crio --version
	I1028 12:08:33.768233  184037 ssh_runner.go:195] Run: crio --version
	I1028 12:08:33.810731  184037 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:08:33.812129  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) Calling .GetIP
	I1028 12:08:33.815424  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:33.815905  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:94:dd", ip: ""} in network mk-kubernetes-upgrade-337849: {Iface:virbr2 ExpiryTime:2024-10-28 13:07:46 +0000 UTC Type:0 Mac:52:54:00:a5:94:dd Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:kubernetes-upgrade-337849 Clientid:01:52:54:00:a5:94:dd}
	I1028 12:08:33.815940  184037 main.go:141] libmachine: (kubernetes-upgrade-337849) DBG | domain kubernetes-upgrade-337849 has defined IP address 192.168.50.142 and MAC address 52:54:00:a5:94:dd in network mk-kubernetes-upgrade-337849
	I1028 12:08:33.816188  184037 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 12:08:33.821146  184037 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-337849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:08:33.821297  184037 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:08:33.821376  184037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:08:37.029709  183636 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:08:37.029789  183636 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:08:37.029930  183636 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:08:37.030091  183636 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:08:37.030226  183636 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:08:37.030312  183636 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:08:37.032068  183636 out.go:235]   - Generating certificates and keys ...
	I1028 12:08:37.032167  183636 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:08:37.032248  183636 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:08:37.032369  183636 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:08:37.032459  183636 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:08:37.032548  183636 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 12:08:37.032622  183636 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 12:08:37.032701  183636 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 12:08:37.032859  183636 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-709250 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I1028 12:08:37.032938  183636 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 12:08:37.033111  183636 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-709250 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I1028 12:08:37.033228  183636 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:08:37.033336  183636 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:08:37.033415  183636 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 12:08:37.033489  183636 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:08:37.033582  183636 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:08:37.033689  183636 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:08:37.033771  183636 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:08:37.033864  183636 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:08:37.033943  183636 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:08:37.034072  183636 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:08:37.034163  183636 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:08:37.035822  183636 out.go:235]   - Booting up control plane ...
	I1028 12:08:37.035963  183636 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:08:37.036080  183636 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:08:37.036194  183636 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:08:37.036355  183636 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:08:37.036499  183636 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:08:37.036575  183636 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:08:37.036750  183636 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:08:37.036915  183636 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:08:37.036978  183636 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.341404ms
	I1028 12:08:37.037081  183636 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:08:37.037179  183636 kubeadm.go:310] [api-check] The API server is healthy after 5.00220305s
	I1028 12:08:37.037283  183636 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:08:37.037392  183636 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:08:37.037441  183636 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:08:37.037702  183636 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-709250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:08:37.037806  183636 kubeadm.go:310] [bootstrap-token] Using token: bjbg1d.g2vd4dt9n365q07g
	I1028 12:08:37.039467  183636 out.go:235]   - Configuring RBAC rules ...
	I1028 12:08:37.039628  183636 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:08:37.039743  183636 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:08:37.039940  183636 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:08:37.040112  183636 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:08:37.040270  183636 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:08:37.040394  183636 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:08:37.040576  183636 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:08:37.040654  183636 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:08:37.040720  183636 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:08:37.040730  183636 kubeadm.go:310] 
	I1028 12:08:37.040838  183636 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:08:37.040856  183636 kubeadm.go:310] 
	I1028 12:08:37.040997  183636 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:08:37.041012  183636 kubeadm.go:310] 
	I1028 12:08:37.041044  183636 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:08:37.041127  183636 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:08:37.041191  183636 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:08:37.041200  183636 kubeadm.go:310] 
	I1028 12:08:37.041270  183636 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:08:37.041278  183636 kubeadm.go:310] 
	I1028 12:08:37.041358  183636 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:08:37.041380  183636 kubeadm.go:310] 
	I1028 12:08:37.041454  183636 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:08:37.041584  183636 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:08:37.041675  183636 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:08:37.041686  183636 kubeadm.go:310] 
	I1028 12:08:37.041793  183636 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:08:37.041864  183636 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:08:37.041870  183636 kubeadm.go:310] 
	I1028 12:08:37.041943  183636 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bjbg1d.g2vd4dt9n365q07g \
	I1028 12:08:37.042088  183636 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:08:37.042122  183636 kubeadm.go:310] 	--control-plane 
	I1028 12:08:37.042132  183636 kubeadm.go:310] 
	I1028 12:08:37.042262  183636 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:08:37.042278  183636 kubeadm.go:310] 
	I1028 12:08:37.042379  183636 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bjbg1d.g2vd4dt9n365q07g \
	I1028 12:08:37.042524  183636 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 12:08:37.042545  183636 cni.go:84] Creating CNI manager for ""
	I1028 12:08:37.042554  183636 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:08:37.044311  183636 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:08:37.045729  183636 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:08:37.058621  183636 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:08:37.084025  183636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:08:37.084103  183636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:08:37.084128  183636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709250 minikube.k8s.io/updated_at=2024_10_28T12_08_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=embed-certs-709250 minikube.k8s.io/primary=true
	I1028 12:08:37.120194  183636 ops.go:34] apiserver oom_adj: -16
	I1028 12:08:37.319964  183636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:08:37.820922  183636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:08:33.871595  184037 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:08:33.871623  184037 crio.go:433] Images already preloaded, skipping extraction
	I1028 12:08:33.871683  184037 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:08:33.910586  184037 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:08:33.910615  184037 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:08:33.910624  184037 kubeadm.go:934] updating node { 192.168.50.142 8443 v1.31.2 crio true true} ...
	I1028 12:08:33.910733  184037 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-337849 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:08:33.910798  184037 ssh_runner.go:195] Run: crio config
	I1028 12:08:33.959549  184037 cni.go:84] Creating CNI manager for ""
	I1028 12:08:33.959569  184037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:08:33.959580  184037 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:08:33.959604  184037 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.142 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-337849 NodeName:kubernetes-upgrade-337849 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:08:33.959746  184037 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-337849"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.142"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.142"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:08:33.959815  184037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:08:33.974494  184037 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:08:33.974580  184037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:08:33.988243  184037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1028 12:08:34.011510  184037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:08:34.029982  184037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1028 12:08:34.049348  184037 ssh_runner.go:195] Run: grep 192.168.50.142	control-plane.minikube.internal$ /etc/hosts
	I1028 12:08:34.053898  184037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:08:34.209257  184037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:08:34.230079  184037 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849 for IP: 192.168.50.142
	I1028 12:08:34.230109  184037 certs.go:194] generating shared ca certs ...
	I1028 12:08:34.230130  184037 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:08:34.230374  184037 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:08:34.230426  184037 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:08:34.230438  184037 certs.go:256] generating profile certs ...
	I1028 12:08:34.230526  184037 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/client.key
	I1028 12:08:34.230572  184037 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.key.da002a8f
	I1028 12:08:34.230606  184037 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.key
	I1028 12:08:34.230712  184037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:08:34.230743  184037 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:08:34.230754  184037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:08:34.230778  184037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:08:34.230801  184037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:08:34.230824  184037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:08:34.230862  184037 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:08:34.231486  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:08:34.260060  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:08:34.287465  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:08:34.313278  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:08:34.339046  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1028 12:08:34.365339  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:08:34.392518  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:08:34.424356  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/kubernetes-upgrade-337849/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:08:34.455949  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:08:34.487448  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:08:34.515606  184037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:08:34.541839  184037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:08:34.559174  184037 ssh_runner.go:195] Run: openssl version
	I1028 12:08:34.565213  184037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:08:34.576731  184037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:08:34.581752  184037 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:08:34.581817  184037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:08:34.587910  184037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:08:34.598638  184037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:08:34.610233  184037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:08:34.617577  184037 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:08:34.617644  184037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:08:34.626063  184037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:08:34.642435  184037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:08:34.661036  184037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:08:34.666219  184037 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:08:34.666284  184037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:08:34.672785  184037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:08:34.685223  184037 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:08:34.690269  184037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:08:34.698306  184037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:08:34.704828  184037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:08:34.711457  184037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:08:34.718345  184037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:08:34.726495  184037 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:08:34.732947  184037 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-337849 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-337849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:08:34.733060  184037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:08:34.733125  184037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:08:34.778280  184037 cri.go:89] found id: "0fdcee8ed92d986c395c11869c8dc9691ce9e4472629ab239f210abe95196b04"
	I1028 12:08:34.778301  184037 cri.go:89] found id: "96c1218d7e859711c8e8cf3ee052d186bae3bf586fc406885d33578f7e350a74"
	I1028 12:08:34.778305  184037 cri.go:89] found id: "83a146aeac979bd8aa14b7c0fada6bc9eb5d7d95fac2729c71fcfc6b09aef647"
	I1028 12:08:34.778308  184037 cri.go:89] found id: "0148919b2b612175d8a25080962a4ca401caad7773864ced0a47baca5b788039"
	I1028 12:08:34.778312  184037 cri.go:89] found id: "d30c3760a8a1197fcf98420849a2f63da8d8fd6aaacfa365d9fe30fdbe18102e"
	I1028 12:08:34.778316  184037 cri.go:89] found id: "9730a50c5bd0594625a649a03286ff553d30b33752f6f5cf16a3b6aca5933391"
	I1028 12:08:34.778327  184037 cri.go:89] found id: "4cb4034b66494eafa0178c4d8355957c9bc003a85487f6cf6ce9bc166e83a15e"
	I1028 12:08:34.778331  184037 cri.go:89] found id: "b4e18c75726dfc90d80a9c53845a39b79217ec566360196154e3672da7a908a4"
	I1028 12:08:34.778335  184037 cri.go:89] found id: ""
	I1028 12:08:34.778391  184037 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-337849 -n kubernetes-upgrade-337849
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-337849 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-337849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-337849
--- FAIL: TestKubernetesUpgrade (356.10s)
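Note: the post-mortem log above validates each control-plane certificate with "openssl x509 -checkend 86400" before deciding whether it can be reused. A minimal by-hand version of that check, using the certificate paths that appear in the log (the loop and the echo output are illustrative and not part of minikube):

    # -checkend 86400 exits non-zero if the certificate expires within the next 24h
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 \
        && echo "valid >24h: $crt" \
        || echo "expiring:   $crt"
    done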

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (91.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-729494 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-729494 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.566904121s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-729494] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-729494" primary control-plane node in "pause-729494" cluster
	* Updating the running kvm2 "pause-729494" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-729494" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
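Note: the assertion at pause_test.go:100 only checks that the second-start output contains the literal string "The running cluster does not require reconfiguration". A rough by-hand equivalent, reusing the exact start command shown above (the grep is illustrative; the test itself does the substring check in Go):

    out/minikube-linux-amd64 start -p pause-729494 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio 2>&1 \
      | grep "The running cluster does not require reconfiguration"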
** stderr ** 
	I1028 12:01:19.416069  176084 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:01:19.416367  176084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:01:19.416379  176084 out.go:358] Setting ErrFile to fd 2...
	I1028 12:01:19.416383  176084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:01:19.416587  176084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:01:19.417135  176084 out.go:352] Setting JSON to false
	I1028 12:01:19.418248  176084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6222,"bootTime":1730110657,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:01:19.418318  176084 start.go:139] virtualization: kvm guest
	I1028 12:01:19.420778  176084 out.go:177] * [pause-729494] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:01:19.422943  176084 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:01:19.422946  176084 notify.go:220] Checking for updates...
	I1028 12:01:19.424646  176084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:01:19.429314  176084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:01:19.431329  176084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:01:19.433024  176084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:01:19.434510  176084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:01:19.436877  176084 config.go:182] Loaded profile config "pause-729494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:01:19.437633  176084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:01:19.437729  176084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:01:19.457607  176084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I1028 12:01:19.458130  176084 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:01:19.458763  176084 main.go:141] libmachine: Using API Version  1
	I1028 12:01:19.458787  176084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:01:19.459141  176084 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:01:19.459385  176084 main.go:141] libmachine: (pause-729494) Calling .DriverName
	I1028 12:01:19.459701  176084 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:01:19.460081  176084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:01:19.460140  176084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:01:19.475666  176084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
	I1028 12:01:19.476172  176084 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:01:19.476797  176084 main.go:141] libmachine: Using API Version  1
	I1028 12:01:19.476830  176084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:01:19.477192  176084 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:01:19.477368  176084 main.go:141] libmachine: (pause-729494) Calling .DriverName
	I1028 12:01:19.515058  176084 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:01:19.516415  176084 start.go:297] selected driver: kvm2
	I1028 12:01:19.516430  176084 start.go:901] validating driver "kvm2" against &{Name:pause-729494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.2 ClusterName:pause-729494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:01:19.516603  176084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:01:19.516940  176084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:01:19.517031  176084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:01:19.533628  176084 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:01:19.534463  176084 cni.go:84] Creating CNI manager for ""
	I1028 12:01:19.534531  176084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:01:19.534585  176084 start.go:340] cluster config:
	{Name:pause-729494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-729494 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:01:19.534740  176084 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:01:19.537081  176084 out.go:177] * Starting "pause-729494" primary control-plane node in "pause-729494" cluster
	I1028 12:01:19.538595  176084 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:01:19.538641  176084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:01:19.538649  176084 cache.go:56] Caching tarball of preloaded images
	I1028 12:01:19.538750  176084 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:01:19.538776  176084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:01:19.538897  176084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/config.json ...
	I1028 12:01:19.539099  176084 start.go:360] acquireMachinesLock for pause-729494: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:02:02.774198  176084 start.go:364] duration metric: took 43.235065861s to acquireMachinesLock for "pause-729494"
	I1028 12:02:02.774257  176084 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:02:02.774265  176084 fix.go:54] fixHost starting: 
	I1028 12:02:02.774715  176084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:02.774779  176084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:02.792511  176084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I1028 12:02:02.793052  176084 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:02.793763  176084 main.go:141] libmachine: Using API Version  1
	I1028 12:02:02.793796  176084 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:02.794250  176084 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:02.794446  176084 main.go:141] libmachine: (pause-729494) Calling .DriverName
	I1028 12:02:02.794635  176084 main.go:141] libmachine: (pause-729494) Calling .GetState
	I1028 12:02:02.796381  176084 fix.go:112] recreateIfNeeded on pause-729494: state=Running err=<nil>
	W1028 12:02:02.796403  176084 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:02:02.799598  176084 out.go:177] * Updating the running kvm2 "pause-729494" VM ...
	I1028 12:02:02.801062  176084 machine.go:93] provisionDockerMachine start ...
	I1028 12:02:02.801086  176084 main.go:141] libmachine: (pause-729494) Calling .DriverName
	I1028 12:02:02.801291  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:02.804166  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:02.804606  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:02.804640  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:02.804755  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHPort
	I1028 12:02:02.804879  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:02.805001  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:02.805109  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHUsername
	I1028 12:02:02.805224  176084 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:02.805435  176084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1028 12:02:02.805451  176084 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:02:02.910576  176084 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-729494
	
	I1028 12:02:02.910603  176084 main.go:141] libmachine: (pause-729494) Calling .GetMachineName
	I1028 12:02:02.910841  176084 buildroot.go:166] provisioning hostname "pause-729494"
	I1028 12:02:02.910869  176084 main.go:141] libmachine: (pause-729494) Calling .GetMachineName
	I1028 12:02:02.911049  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:02.913805  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:02.914101  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:02.914126  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:02.914346  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHPort
	I1028 12:02:02.914512  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:02.914644  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:02.914827  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHUsername
	I1028 12:02:02.914984  176084 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:02.915159  176084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1028 12:02:02.915170  176084 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-729494 && echo "pause-729494" | sudo tee /etc/hostname
	I1028 12:02:03.033380  176084 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-729494
	
	I1028 12:02:03.033415  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:03.036416  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.036843  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:03.036888  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.037111  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHPort
	I1028 12:02:03.037288  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:03.037440  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:03.037573  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHUsername
	I1028 12:02:03.037766  176084 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:03.037962  176084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1028 12:02:03.037979  176084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-729494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-729494/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-729494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:02:03.138665  176084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:02:03.138704  176084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:02:03.138729  176084 buildroot.go:174] setting up certificates
	I1028 12:02:03.138741  176084 provision.go:84] configureAuth start
	I1028 12:02:03.138754  176084 main.go:141] libmachine: (pause-729494) Calling .GetMachineName
	I1028 12:02:03.138999  176084 main.go:141] libmachine: (pause-729494) Calling .GetIP
	I1028 12:02:03.141703  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.142030  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:03.142070  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.142233  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:03.144744  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.145100  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:03.145127  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.145302  176084 provision.go:143] copyHostCerts
	I1028 12:02:03.145378  176084 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:02:03.145391  176084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:02:03.145443  176084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:02:03.145558  176084 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:02:03.145569  176084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:02:03.145591  176084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:02:03.145663  176084 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:02:03.145683  176084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:02:03.145716  176084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:02:03.145786  176084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.pause-729494 san=[127.0.0.1 192.168.50.55 localhost minikube pause-729494]
	I1028 12:02:03.295798  176084 provision.go:177] copyRemoteCerts
	I1028 12:02:03.295862  176084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:02:03.295884  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:03.299075  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.299424  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:03.299445  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.299688  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHPort
	I1028 12:02:03.299880  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:03.300021  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHUsername
	I1028 12:02:03.300201  176084 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/pause-729494/id_rsa Username:docker}
	I1028 12:02:03.381199  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:02:03.415641  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1028 12:02:03.446227  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:02:03.477367  176084 provision.go:87] duration metric: took 338.605584ms to configureAuth
	I1028 12:02:03.477409  176084 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:02:03.477696  176084 config.go:182] Loaded profile config "pause-729494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:03.477789  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:03.480382  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.480709  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:03.480739  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:03.480947  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHPort
	I1028 12:02:03.481167  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:03.481341  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:03.481492  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHUsername
	I1028 12:02:03.481683  176084 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:03.481885  176084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1028 12:02:03.481901  176084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:02:09.113647  176084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
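The step just above writes /etc/sysconfig/crio.minikube over SSH and restarts CRI-O in the same command. A quick way to sanity-check the result on the node, sketched here as a suggestion rather than something the test itself runs:

	cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio      # expect: active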
	I1028 12:02:09.113681  176084 machine.go:96] duration metric: took 6.312601719s to provisionDockerMachine
	I1028 12:02:09.113698  176084 start.go:293] postStartSetup for "pause-729494" (driver="kvm2")
	I1028 12:02:09.113711  176084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:02:09.113732  176084 main.go:141] libmachine: (pause-729494) Calling .DriverName
	I1028 12:02:09.114079  176084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:02:09.114114  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:09.118980  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.119483  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:09.119511  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.119873  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHPort
	I1028 12:02:09.120111  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:09.120289  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHUsername
	I1028 12:02:09.120464  176084 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/pause-729494/id_rsa Username:docker}
	I1028 12:02:09.209650  176084 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:02:09.214614  176084 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:02:09.214651  176084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:02:09.214730  176084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:02:09.214832  176084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:02:09.214952  176084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:02:09.227427  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:02:09.263505  176084 start.go:296] duration metric: took 149.789536ms for postStartSetup
	I1028 12:02:09.263556  176084 fix.go:56] duration metric: took 6.48928997s for fixHost
	I1028 12:02:09.263582  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:09.267044  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.267434  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:09.267464  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.267674  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHPort
	I1028 12:02:09.267904  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:09.268093  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:09.268264  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHUsername
	I1028 12:02:09.268484  176084 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:09.268753  176084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I1028 12:02:09.268772  176084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:02:09.385447  176084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116929.376838532
	
	I1028 12:02:09.385474  176084 fix.go:216] guest clock: 1730116929.376838532
	I1028 12:02:09.385484  176084 fix.go:229] Guest: 2024-10-28 12:02:09.376838532 +0000 UTC Remote: 2024-10-28 12:02:09.263561806 +0000 UTC m=+49.897380385 (delta=113.276726ms)
	I1028 12:02:09.385512  176084 fix.go:200] guest clock delta is within tolerance: 113.276726ms
	I1028 12:02:09.385519  176084 start.go:83] releasing machines lock for "pause-729494", held for 6.611283894s
	I1028 12:02:09.386202  176084 main.go:141] libmachine: (pause-729494) Calling .DriverName
	I1028 12:02:09.386604  176084 main.go:141] libmachine: (pause-729494) Calling .GetIP
	I1028 12:02:09.390490  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.390910  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:09.391070  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.391538  176084 main.go:141] libmachine: (pause-729494) Calling .DriverName
	I1028 12:02:09.392265  176084 main.go:141] libmachine: (pause-729494) Calling .DriverName
	I1028 12:02:09.392550  176084 main.go:141] libmachine: (pause-729494) Calling .DriverName
	I1028 12:02:09.392654  176084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:02:09.392703  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:09.393069  176084 ssh_runner.go:195] Run: cat /version.json
	I1028 12:02:09.393095  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHHostname
	I1028 12:02:09.396595  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.396987  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:09.397020  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.397039  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.397422  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHPort
	I1028 12:02:09.397602  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:09.397752  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHUsername
	I1028 12:02:09.397876  176084 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/pause-729494/id_rsa Username:docker}
	I1028 12:02:09.398639  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:09.398667  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:09.398971  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHPort
	I1028 12:02:09.399158  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHKeyPath
	I1028 12:02:09.399356  176084 main.go:141] libmachine: (pause-729494) Calling .GetSSHUsername
	I1028 12:02:09.399522  176084 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/pause-729494/id_rsa Username:docker}
	I1028 12:02:09.529285  176084 ssh_runner.go:195] Run: systemctl --version
	I1028 12:02:09.536460  176084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:02:09.716318  176084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:02:09.728110  176084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:02:09.728203  176084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:02:09.740685  176084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1028 12:02:09.740721  176084 start.go:495] detecting cgroup driver to use...
	I1028 12:02:09.740797  176084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:02:09.769563  176084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:02:09.805124  176084 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:02:09.805193  176084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:02:09.826196  176084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:02:09.847282  176084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:02:10.023030  176084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:02:10.256543  176084 docker.go:233] disabling docker service ...
	I1028 12:02:10.256630  176084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:02:10.280260  176084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:02:10.296479  176084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:02:10.441661  176084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:02:10.613087  176084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
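The stop → disable → mask sequence above ensures neither docker nor cri-docker can be pulled back in by socket activation while CRI-O is the runtime. To confirm the end state by hand (not part of the test run):

	systemctl is-enabled cri-docker.service docker.service   # expect: masked for both
	systemctl is-active docker                               # expect: inactive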
	I1028 12:02:10.630517  176084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:02:10.653549  176084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:02:10.653635  176084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:10.665461  176084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:02:10.665560  176084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:10.681456  176084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:10.694718  176084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:10.711251  176084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:02:10.727597  176084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:10.744204  176084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:10.758806  176084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
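Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch assembled from the commands in this log; any other keys already present in the file are left untouched):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]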
	I1028 12:02:10.770653  176084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:02:10.783956  176084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:02:10.794925  176084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:02:10.961303  176084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:02:16.711863  176084 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.750513571s)
	I1028 12:02:16.711903  176084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:02:16.711976  176084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:02:16.717346  176084 start.go:563] Will wait 60s for crictl version
	I1028 12:02:16.717416  176084 ssh_runner.go:195] Run: which crictl
	I1028 12:02:16.722873  176084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:02:16.777505  176084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:02:16.777604  176084 ssh_runner.go:195] Run: crio --version
	I1028 12:02:16.818050  176084 ssh_runner.go:195] Run: crio --version
	I1028 12:02:16.854107  176084 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:02:16.855529  176084 main.go:141] libmachine: (pause-729494) Calling .GetIP
	I1028 12:02:16.858852  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:16.859209  176084 main.go:141] libmachine: (pause-729494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:b6:9d", ip: ""} in network mk-pause-729494: {Iface:virbr2 ExpiryTime:2024-10-28 13:00:39 +0000 UTC Type:0 Mac:52:54:00:f4:b6:9d Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:pause-729494 Clientid:01:52:54:00:f4:b6:9d}
	I1028 12:02:16.859235  176084 main.go:141] libmachine: (pause-729494) DBG | domain pause-729494 has defined IP address 192.168.50.55 and MAC address 52:54:00:f4:b6:9d in network mk-pause-729494
	I1028 12:02:16.859484  176084 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 12:02:16.864380  176084 kubeadm.go:883] updating cluster {Name:pause-729494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:pause-729494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-pl
ugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:02:16.864594  176084 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:02:16.864660  176084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:02:16.912848  176084 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:02:16.912879  176084 crio.go:433] Images already preloaded, skipping extraction
	I1028 12:02:16.912947  176084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:02:16.954863  176084 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:02:16.954893  176084 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:02:16.954903  176084 kubeadm.go:934] updating node { 192.168.50.55 8443 v1.31.2 crio true true} ...
	I1028 12:02:16.955034  176084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-729494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-729494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
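The kubelet unit override shown above is written a few lines further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. If you want to confirm the flags that actually take effect on the node (an inspection hint, not a step the test performs):

	sudo systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in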
	I1028 12:02:16.955133  176084 ssh_runner.go:195] Run: crio config
	I1028 12:02:17.022584  176084 cni.go:84] Creating CNI manager for ""
	I1028 12:02:17.022615  176084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:02:17.022628  176084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:02:17.022663  176084 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.55 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-729494 NodeName:pause-729494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:02:17.022840  176084 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-729494"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.55"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.55"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
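The kubeadm configuration dumped above is copied to the node shortly afterwards as /var/tmp/minikube/kubeadm.yaml.new. To double-check it independently, something along these lines should work (hedged: kubeadm config validate exists in recent kubeadm releases but is not invoked by this test):

	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new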
	I1028 12:02:17.022915  176084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:02:17.034460  176084 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:02:17.034541  176084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:02:17.045678  176084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1028 12:02:17.064331  176084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:02:17.082655  176084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1028 12:02:17.104681  176084 ssh_runner.go:195] Run: grep 192.168.50.55	control-plane.minikube.internal$ /etc/hosts
	I1028 12:02:17.109317  176084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:02:17.246479  176084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:02:17.264811  176084 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494 for IP: 192.168.50.55
	I1028 12:02:17.264837  176084 certs.go:194] generating shared ca certs ...
	I1028 12:02:17.264857  176084 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:17.265046  176084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:02:17.265114  176084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:02:17.265127  176084 certs.go:256] generating profile certs ...
	I1028 12:02:17.265234  176084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/client.key
	I1028 12:02:17.265313  176084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/apiserver.key.d97dab7b
	I1028 12:02:17.265387  176084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/proxy-client.key
	I1028 12:02:17.265561  176084 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:02:17.265614  176084 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:02:17.265630  176084 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:02:17.265661  176084 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:02:17.265707  176084 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:02:17.265741  176084 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:02:17.265797  176084 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:02:17.266653  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:02:17.297776  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:02:17.327743  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:02:17.357661  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:02:17.386800  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1028 12:02:17.415407  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:02:17.444744  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:02:17.472715  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:02:17.501449  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:02:17.530470  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:02:17.558797  176084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:02:17.584901  176084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:02:17.680131  176084 ssh_runner.go:195] Run: openssl version
	I1028 12:02:17.762953  176084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:02:17.908094  176084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:02:17.941599  176084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:02:17.941699  176084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:02:18.003516  176084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:02:18.086193  176084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:02:18.184367  176084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:02:18.210330  176084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:02:18.210408  176084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:02:18.275987  176084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:02:18.335319  176084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:02:18.436766  176084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:02:18.480500  176084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:02:18.480579  176084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:02:18.530233  176084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:02:18.570442  176084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:02:18.581787  176084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:02:18.594595  176084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:02:18.607136  176084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:02:18.617846  176084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:02:18.633695  176084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:02:18.640848  176084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
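Each of the openssl runs above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero otherwise. A standalone example using one of the certificate paths seen earlier in this log:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expires within 24h (or already expired)"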
	I1028 12:02:18.647651  176084 kubeadm.go:392] StartCluster: {Name:pause-729494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:pause-729494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugi
n:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:02:18.647803  176084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:02:18.647885  176084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:02:18.768024  176084 cri.go:89] found id: "ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff"
	I1028 12:02:18.768065  176084 cri.go:89] found id: "5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce"
	I1028 12:02:18.768071  176084 cri.go:89] found id: "b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7"
	I1028 12:02:18.768077  176084 cri.go:89] found id: "02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f"
	I1028 12:02:18.768081  176084 cri.go:89] found id: "8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39"
	I1028 12:02:18.768086  176084 cri.go:89] found id: "128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18"
	I1028 12:02:18.768090  176084 cri.go:89] found id: "25240760d8f7ba10cdc2b985cfbc25a318371a7b641cb1e8fe6157d73691d13b"
	I1028 12:02:18.768094  176084 cri.go:89] found id: "878d7f776688c1b78f81b25b161d6332ce676d48713c8d6331de4d266a02f866"
	I1028 12:02:18.768100  176084 cri.go:89] found id: "636d5c3d7edae0f22bddd359ac7abc0c6a5ef972e91bcf033ac85d60cda22119"
	I1028 12:02:18.768109  176084 cri.go:89] found id: "463f1c2e2c4bdc8e01b2a0c6014e83def56c9f66a29107e90ac34ec0aff21114"
	I1028 12:02:18.768114  176084 cri.go:89] found id: ""
	I1028 12:02:18.768172  176084 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
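The container IDs listed just before the output cuts off come from crictl's label filter. To map an ID back to its pod and container name, the unquieted form of the same command, or crictl inspect on a single ID, is enough (a follow-up suggestion, not something the harness runs):

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff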
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-729494 -n pause-729494
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-729494 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-729494 logs -n 25: (1.779186214s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo docker                         | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo find                           | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo crio                           | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-903216                                     | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC | 28 Oct 24 12:02 UTC |
	| start   | -p NoKubernetes-606176                               | NoKubernetes-606176    | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | --no-kubernetes                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=1.20                            |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-606176                               | NoKubernetes-606176    | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| stop    | stopped-upgrade-755815 stop                          | minikube               | jenkins | v1.26.0 | 28 Oct 24 12:02 UTC | 28 Oct 24 12:02 UTC |
	| start   | -p stopped-upgrade-755815                            | stopped-upgrade-755815 | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p running-upgrade-628680                            | running-upgrade-628680 | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:02:32
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:02:32.118572  179097 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:02:32.118822  179097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:32.118832  179097 out.go:358] Setting ErrFile to fd 2...
	I1028 12:02:32.118836  179097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:32.119018  179097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:02:32.119559  179097 out.go:352] Setting JSON to false
	I1028 12:02:32.120630  179097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6295,"bootTime":1730110657,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:02:32.120737  179097 start.go:139] virtualization: kvm guest
	I1028 12:02:32.122955  179097 out.go:177] * [running-upgrade-628680] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:02:32.124482  179097 notify.go:220] Checking for updates...
	I1028 12:02:32.124490  179097 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:02:32.125676  179097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:02:32.127379  179097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:02:32.128965  179097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:02:32.130315  179097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:02:32.131573  179097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:02:32.133473  179097 config.go:182] Loaded profile config "running-upgrade-628680": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 12:02:32.134159  179097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:32.134239  179097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:32.152055  179097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I1028 12:02:32.152579  179097 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:32.153075  179097 main.go:141] libmachine: Using API Version  1
	I1028 12:02:32.153104  179097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:32.153440  179097 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:32.153638  179097 main.go:141] libmachine: (running-upgrade-628680) Calling .DriverName
	I1028 12:02:32.155390  179097 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 12:02:32.156782  179097 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:02:32.157067  179097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:32.157105  179097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:32.172331  179097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I1028 12:02:32.172766  179097 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:32.173298  179097 main.go:141] libmachine: Using API Version  1
	I1028 12:02:32.173337  179097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:32.173730  179097 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:32.173901  179097 main.go:141] libmachine: (running-upgrade-628680) Calling .DriverName
	I1028 12:02:32.211280  179097 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:02:32.212577  179097 start.go:297] selected driver: kvm2
	I1028 12:02:32.212593  179097 start.go:901] validating driver "kvm2" against &{Name:running-upgrade-628680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-628
680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.7 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 12:02:32.212709  179097 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:02:32.213402  179097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:32.213486  179097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:02:32.229628  179097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:02:32.230089  179097 cni.go:84] Creating CNI manager for ""
	I1028 12:02:32.230151  179097 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:02:32.230206  179097 start.go:340] cluster config:
	{Name:running-upgrade-628680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-628680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.7 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 12:02:32.230310  179097 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:32.232290  179097 out.go:177] * Starting "running-upgrade-628680" primary control-plane node in "running-upgrade-628680" cluster
	I1028 12:02:34.186711  178984 start.go:364] duration metric: took 11.91470295s to acquireMachinesLock for "stopped-upgrade-755815"
	I1028 12:02:34.186777  178984 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:02:34.186787  178984 fix.go:54] fixHost starting: 
	I1028 12:02:34.187213  178984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:34.187267  178984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:34.205010  178984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I1028 12:02:34.205446  178984 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:34.205932  178984 main.go:141] libmachine: Using API Version  1
	I1028 12:02:34.205958  178984 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:34.206325  178984 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:34.206520  178984 main.go:141] libmachine: (stopped-upgrade-755815) Calling .DriverName
	I1028 12:02:34.206682  178984 main.go:141] libmachine: (stopped-upgrade-755815) Calling .GetState
	I1028 12:02:34.208429  178984 fix.go:112] recreateIfNeeded on stopped-upgrade-755815: state=Stopped err=<nil>
	I1028 12:02:34.208457  178984 main.go:141] libmachine: (stopped-upgrade-755815) Calling .DriverName
	W1028 12:02:34.208607  178984 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:02:34.210610  178984 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-755815" ...
	I1028 12:02:30.900578  176084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:02:30.913781  176084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:02:30.934017  176084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:02:30.934114  176084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 12:02:30.934136  176084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 12:02:30.947953  176084 system_pods.go:59] 6 kube-system pods found
	I1028 12:02:30.947990  176084 system_pods.go:61] "coredns-7c65d6cfc9-2x9sx" [6c991e2e-d7bc-4aee-a537-4885075a5453] Running
	I1028 12:02:30.948002  176084 system_pods.go:61] "etcd-pause-729494" [f3e0e1e4-25c5-4343-8256-24c8080d6f9b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:02:30.948012  176084 system_pods.go:61] "kube-apiserver-pause-729494" [47a5b86a-6abd-42f5-86bb-cec0d357827c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:02:30.948022  176084 system_pods.go:61] "kube-controller-manager-pause-729494" [cdfc5376-eb2e-46ae-a83d-bbfeddb8319c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:02:30.948035  176084 system_pods.go:61] "kube-proxy-nllwf" [e08aea94-206d-4bec-96b4-8fb7703efeda] Running
	I1028 12:02:30.948048  176084 system_pods.go:61] "kube-scheduler-pause-729494" [6666e789-7fb0-4bd4-bc83-9228d9aa987d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:02:30.948056  176084 system_pods.go:74] duration metric: took 14.012984ms to wait for pod list to return data ...
	I1028 12:02:30.948066  176084 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:02:30.953672  176084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:02:30.953701  176084 node_conditions.go:123] node cpu capacity is 2
	I1028 12:02:30.953713  176084 node_conditions.go:105] duration metric: took 5.640082ms to run NodePressure ...
	I1028 12:02:30.953734  176084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:02:31.223239  176084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:02:31.227689  176084 kubeadm.go:739] kubelet initialised
	I1028 12:02:31.227716  176084 kubeadm.go:740] duration metric: took 4.442639ms waiting for restarted kubelet to initialise ...
	I1028 12:02:31.227727  176084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:02:31.231928  176084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:31.237192  176084 pod_ready.go:93] pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:31.237229  176084 pod_ready.go:82] duration metric: took 5.273291ms for pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:31.237243  176084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:33.243840  176084 pod_ready.go:103] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"False"
	I1028 12:02:32.738576  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.738948  178661 main.go:141] libmachine: (NoKubernetes-606176) Found IP for machine: 192.168.61.189
	I1028 12:02:32.738961  178661 main.go:141] libmachine: (NoKubernetes-606176) Reserving static IP address...
	I1028 12:02:32.738973  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has current primary IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.739272  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-606176", mac: "52:54:00:44:0a:13", ip: "192.168.61.189"} in network mk-NoKubernetes-606176
	I1028 12:02:32.819412  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | Getting to WaitForSSH function...
	I1028 12:02:32.819428  178661 main.go:141] libmachine: (NoKubernetes-606176) Reserved static IP address: 192.168.61.189
	I1028 12:02:32.819467  178661 main.go:141] libmachine: (NoKubernetes-606176) Waiting for SSH to be available...
	I1028 12:02:32.822128  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.822580  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:32.822604  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.822751  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | Using SSH client type: external
	I1028 12:02:32.822774  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa (-rw-------)
	I1028 12:02:32.822800  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:02:32.822812  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | About to run SSH command:
	I1028 12:02:32.822834  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | exit 0
	I1028 12:02:32.953854  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | SSH cmd err, output: <nil>: 
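
For reference, the WaitForSSH probe above is just an external ssh run of "exit 0" with host-key checking disabled; a minimal standalone form of that invocation, with the arguments copied from the log lines above (key path and IP are specific to this run), would be:

    # SSH reachability probe, restating the logged external ssh call.
    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa \
      -p 22 docker@192.168.61.189 'exit 0' && echo "SSH is up"
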
	I1028 12:02:32.954184  178661 main.go:141] libmachine: (NoKubernetes-606176) KVM machine creation complete!
	I1028 12:02:32.954495  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetConfigRaw
	I1028 12:02:32.955085  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:32.955256  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:32.955433  178661 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 12:02:32.955442  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetState
	I1028 12:02:32.956861  178661 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 12:02:32.956868  178661 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 12:02:32.956872  178661 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 12:02:32.956876  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:32.959356  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.959708  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:32.959730  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.959878  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:32.960032  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:32.960191  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:32.960316  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:32.960443  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:32.960694  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:32.960700  178661 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 12:02:33.061207  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:02:33.061220  178661 main.go:141] libmachine: Detecting the provisioner...
	I1028 12:02:33.061247  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.064317  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.064661  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.064683  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.064802  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.064983  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.065094  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.065204  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.065354  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:33.065560  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:33.065568  178661 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 12:02:33.170474  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 12:02:33.170535  178661 main.go:141] libmachine: found compatible host: buildroot
	I1028 12:02:33.170539  178661 main.go:141] libmachine: Provisioning with buildroot...
	I1028 12:02:33.170545  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetMachineName
	I1028 12:02:33.170769  178661 buildroot.go:166] provisioning hostname "NoKubernetes-606176"
	I1028 12:02:33.170785  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetMachineName
	I1028 12:02:33.170894  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.173437  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.173847  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.173882  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.174009  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.174195  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.174353  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.174472  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.174591  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:33.174761  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:33.174767  178661 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-606176 && echo "NoKubernetes-606176" | sudo tee /etc/hostname
	I1028 12:02:33.290856  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-606176
	
	I1028 12:02:33.290877  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.293996  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.294383  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.294399  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.294582  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.294769  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.294923  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.295075  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.295274  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:33.295447  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:33.295457  178661 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-606176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-606176/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-606176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:02:33.407173  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: 
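
The two SSH commands above (the hostname/tee call and the /etc/hosts script) together make the hostname change idempotent. A consolidated sketch of the same logic, with the profile name from this run hard-coded:

    # Idempotent hostname + /etc/hosts update, consolidated from the logged commands.
    H=NoKubernetes-606176
    sudo hostname "$H" && echo "$H" | sudo tee /etc/hostname
    if ! grep -xq ".*\s$H" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $H/g" /etc/hosts
      else
        echo "127.0.1.1 $H" | sudo tee -a /etc/hosts
      fi
    fi
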
	I1028 12:02:33.407192  178661 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:02:33.407219  178661 buildroot.go:174] setting up certificates
	I1028 12:02:33.407228  178661 provision.go:84] configureAuth start
	I1028 12:02:33.407236  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetMachineName
	I1028 12:02:33.407477  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetIP
	I1028 12:02:33.410477  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.410828  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.410844  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.411007  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.413106  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.413431  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.413454  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.413560  178661 provision.go:143] copyHostCerts
	I1028 12:02:33.413633  178661 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:02:33.413649  178661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:02:33.413701  178661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:02:33.413793  178661 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:02:33.413796  178661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:02:33.413814  178661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:02:33.413879  178661 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:02:33.413882  178661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:02:33.413897  178661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:02:33.413950  178661 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-606176 san=[127.0.0.1 192.168.61.189 NoKubernetes-606176 localhost minikube]
	I1028 12:02:33.551586  178661 provision.go:177] copyRemoteCerts
	I1028 12:02:33.551628  178661 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:02:33.551650  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.554105  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.554404  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.554443  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.554632  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.554787  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.554889  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.554980  178661 sshutil.go:53] new ssh client: &{IP:192.168.61.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa Username:docker}
	I1028 12:02:33.637012  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:02:33.663728  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 12:02:33.689940  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:02:33.715667  178661 provision.go:87] duration metric: took 308.426393ms to configureAuth
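
The three scp calls above install the CA plus the freshly generated server keypair under /etc/docker inside the guest (SANs per the provision line: 127.0.0.1, 192.168.61.189, NoKubernetes-606176, localhost, minikube). A quick way to inspect the result from inside the guest, assuming openssl is present (this is a generic check, not something the run itself executes):

    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
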
	I1028 12:02:33.715685  178661 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:02:33.715838  178661 config.go:182] Loaded profile config "NoKubernetes-606176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:33.715895  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.718464  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.718767  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.718788  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.718940  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.719107  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.719235  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.719369  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.719486  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:33.719734  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:33.719749  178661 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:02:33.946740  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
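The command above writes an environment file consumed by the crio unit and then restarts the service. On the guest the net effect can be confirmed with (again a generic check, not part of this run; the expected file content is taken from the echoed output above):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio    # expect "active" after the restart
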
	I1028 12:02:33.946772  178661 main.go:141] libmachine: Checking connection to Docker...
	I1028 12:02:33.946779  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetURL
	I1028 12:02:33.948250  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | Using libvirt version 6000000
	I1028 12:02:33.951035  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.951451  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.951478  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.951610  178661 main.go:141] libmachine: Docker is up and running!
	I1028 12:02:33.951619  178661 main.go:141] libmachine: Reticulating splines...
	I1028 12:02:33.951626  178661 client.go:171] duration metric: took 24.539769855s to LocalClient.Create
	I1028 12:02:33.951646  178661 start.go:167] duration metric: took 24.539836755s to libmachine.API.Create "NoKubernetes-606176"
	I1028 12:02:33.951652  178661 start.go:293] postStartSetup for "NoKubernetes-606176" (driver="kvm2")
	I1028 12:02:33.951664  178661 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:02:33.951697  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:33.951941  178661 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:02:33.951956  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.954397  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.954724  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.954747  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.954925  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.955101  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.955278  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.955425  178661 sshutil.go:53] new ssh client: &{IP:192.168.61.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa Username:docker}
	I1028 12:02:34.036990  178661 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:02:34.041614  178661 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:02:34.041646  178661 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:02:34.041712  178661 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:02:34.041777  178661 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:02:34.041857  178661 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:02:34.052080  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:02:34.076434  178661 start.go:296] duration metric: took 124.770191ms for postStartSetup
	I1028 12:02:34.076474  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetConfigRaw
	I1028 12:02:34.077092  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetIP
	I1028 12:02:34.079831  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.080259  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.080287  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.080553  178661 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/config.json ...
	I1028 12:02:34.080763  178661 start.go:128] duration metric: took 24.694949268s to createHost
	I1028 12:02:34.080797  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:34.083156  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.083447  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.083469  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.083633  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:34.083795  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:34.083943  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:34.084079  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:34.084233  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:34.084391  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:34.084395  178661 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:02:34.186567  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116954.157685888
	
	I1028 12:02:34.186583  178661 fix.go:216] guest clock: 1730116954.157685888
	I1028 12:02:34.186591  178661 fix.go:229] Guest: 2024-10-28 12:02:34.157685888 +0000 UTC Remote: 2024-10-28 12:02:34.080768965 +0000 UTC m=+33.421890115 (delta=76.916923ms)
	I1028 12:02:34.186614  178661 fix.go:200] guest clock delta is within tolerance: 76.916923ms
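
The guest clock comparison above (delta 76.9ms, inside tolerance) amounts to reading "date +%s.%N" on both sides and subtracting. Roughly, reusing the key-based SSH access shown earlier for this machine:

    # Rough equivalent of the guest-clock check: diff guest vs. local clock.
    GUEST=$(ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa \
      docker@192.168.61.189 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "delta: %.6f s\n", h - g }'
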
	I1028 12:02:34.186630  178661 start.go:83] releasing machines lock for "NoKubernetes-606176", held for 24.800997268s
	I1028 12:02:34.186656  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:34.186889  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetIP
	I1028 12:02:34.190459  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.190803  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.190831  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.191084  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:34.191622  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:34.191815  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:34.191914  178661 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:02:34.191947  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:34.192044  178661 ssh_runner.go:195] Run: cat /version.json
	I1028 12:02:34.192059  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:34.194935  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.195311  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.195331  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.195349  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.195538  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:34.195693  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:34.195805  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.195830  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.195855  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:34.195980  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:34.196050  178661 sshutil.go:53] new ssh client: &{IP:192.168.61.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa Username:docker}
	I1028 12:02:34.196118  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:34.196229  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:34.196361  178661 sshutil.go:53] new ssh client: &{IP:192.168.61.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa Username:docker}
	I1028 12:02:34.295944  178661 ssh_runner.go:195] Run: systemctl --version
	I1028 12:02:34.303864  178661 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:02:34.468050  178661 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:02:34.475328  178661 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:02:34.475392  178661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:02:34.492679  178661 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
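
The 87-podman-bridge.conflist hit above is what gets renamed out of the way so it cannot conflict with the bridge CNI minikube writes later. The same find/-exec command, re-quoted for an interactive shell:

    # Park conflicting bridge/podman CNI configs, as logged above.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
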
	I1028 12:02:34.492696  178661 start.go:495] detecting cgroup driver to use...
	I1028 12:02:34.492769  178661 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:02:34.510630  178661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:02:34.526406  178661 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:02:34.526451  178661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:02:34.541808  178661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:02:34.561402  178661 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:02:34.692021  178661 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:02:34.843715  178661 docker.go:233] disabling docker service ...
	I1028 12:02:34.843773  178661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:02:34.860083  178661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:02:34.876629  178661 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:02:35.027920  178661 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:02:35.161409  178661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
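
Before cri-o is configured, any cri-docker and docker units are stopped, disabled, and masked so they cannot own the container runtime. The logged sequence condenses to roughly the following (the "|| true" guards are only there so the sketch keeps going on hosts where a unit does not exist):

    sudo systemctl stop -f cri-docker.socket cri-docker.service || true
    sudo systemctl disable cri-docker.socket || true
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service || true
    sudo systemctl disable docker.socket || true
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"
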
	I1028 12:02:35.176700  178661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:02:35.203638  178661 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:02:35.203715  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.217364  178661 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:02:35.217420  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.228651  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.240181  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.252876  178661 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:02:35.265493  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.277733  178661 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.298891  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
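
Taken together, the sed edits above pin the pause image, switch cri-o to the cgroupfs cgroup driver, route conmon into the pod cgroup, and allow unprivileged low ports. A spot-check of the resulting drop-in could look like this (expected values reconstructed from the commands, not read back from the VM):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
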
	I1028 12:02:35.310519  178661 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:02:35.320995  178661 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:02:35.321088  178661 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:02:35.335972  178661 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
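
The status-255 sysctl above is expected on a fresh guest where br_netfilter is not loaded yet; the recovery is simply to load the module and enable IPv4 forwarding, which condenses to:

    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sysctl net.ipv4.ip_forward    # expect: net.ipv4.ip_forward = 1
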
	I1028 12:02:35.348312  178661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:02:35.505928  178661 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:02:35.615915  178661 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:02:35.615971  178661 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:02:35.621965  178661 start.go:563] Will wait 60s for crictl version
	I1028 12:02:35.622020  178661 ssh_runner.go:195] Run: which crictl
	I1028 12:02:35.626173  178661 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:02:35.666940  178661 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:02:35.667007  178661 ssh_runner.go:195] Run: crio --version
	I1028 12:02:35.697950  178661 ssh_runner.go:195] Run: crio --version
	I1028 12:02:35.738815  178661 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
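
The readiness checks above wait for the crio socket and then query the runtime through crictl and crio. Repeated by hand inside the guest they are just:

    stat /var/run/crio/crio.sock
    sudo /usr/bin/crictl version    # RuntimeName: cri-o, RuntimeVersion: 1.29.1
    crio --version
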
	I1028 12:02:32.233588  179097 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I1028 12:02:32.233635  179097 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I1028 12:02:32.233650  179097 cache.go:56] Caching tarball of preloaded images
	I1028 12:02:32.233773  179097 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:02:32.233789  179097 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I1028 12:02:32.233876  179097 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/running-upgrade-628680/config.json ...
	I1028 12:02:32.234077  179097 start.go:360] acquireMachinesLock for running-upgrade-628680: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:02:34.211954  178984 main.go:141] libmachine: (stopped-upgrade-755815) Calling .Start
	I1028 12:02:34.212149  178984 main.go:141] libmachine: (stopped-upgrade-755815) Ensuring networks are active...
	I1028 12:02:34.212890  178984 main.go:141] libmachine: (stopped-upgrade-755815) Ensuring network default is active
	I1028 12:02:34.213280  178984 main.go:141] libmachine: (stopped-upgrade-755815) Ensuring network mk-stopped-upgrade-755815 is active
	I1028 12:02:34.213722  178984 main.go:141] libmachine: (stopped-upgrade-755815) Getting domain xml...
	I1028 12:02:34.214521  178984 main.go:141] libmachine: (stopped-upgrade-755815) Creating domain...
	I1028 12:02:35.533435  178984 main.go:141] libmachine: (stopped-upgrade-755815) Waiting to get IP...
	I1028 12:02:35.534355  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:35.534908  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:35.534968  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:35.534887  179174 retry.go:31] will retry after 212.195899ms: waiting for machine to come up
	I1028 12:02:35.748022  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:35.748493  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:35.748525  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:35.748438  179174 retry.go:31] will retry after 386.090397ms: waiting for machine to come up
	I1028 12:02:36.136454  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:36.137296  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:36.137335  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:36.137248  179174 retry.go:31] will retry after 345.767506ms: waiting for machine to come up
	I1028 12:02:36.485093  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:36.485694  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:36.485721  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:36.485647  179174 retry.go:31] will retry after 554.902566ms: waiting for machine to come up
	I1028 12:02:37.042252  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:37.042943  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:37.042981  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:37.042864  179174 retry.go:31] will retry after 483.556813ms: waiting for machine to come up
	I1028 12:02:35.246674  176084 pod_ready.go:103] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"False"
	I1028 12:02:37.259452  176084 pod_ready.go:103] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"False"
	I1028 12:02:38.244577  176084 pod_ready.go:93] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:38.244603  176084 pod_ready.go:82] duration metric: took 7.007352707s for pod "etcd-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.244613  176084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.755458  176084 pod_ready.go:93] pod "kube-apiserver-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:38.755490  176084 pod_ready.go:82] duration metric: took 510.868739ms for pod "kube-apiserver-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.755506  176084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.765422  176084 pod_ready.go:93] pod "kube-controller-manager-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:38.765449  176084 pod_ready.go:82] duration metric: took 9.933425ms for pod "kube-controller-manager-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.765461  176084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nllwf" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.772415  176084 pod_ready.go:93] pod "kube-proxy-nllwf" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:38.772434  176084 pod_ready.go:82] duration metric: took 6.966069ms for pod "kube-proxy-nllwf" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.772443  176084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:35.740536  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetIP
	I1028 12:02:35.744346  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:35.744800  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:35.744817  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:35.745050  178661 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:02:35.750982  178661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
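
The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the libvirt gateway. The same pipeline, re-spaced for readability (gateway address taken from this run):

    GATEWAY=192.168.61.1
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '%s\thost.minikube.internal\n' "$GATEWAY"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
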
	I1028 12:02:35.765870  178661 kubeadm.go:883] updating cluster {Name:NoKubernetes-606176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.31.2 ClusterName:NoKubernetes-606176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.189 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:02:35.765965  178661 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:02:35.766008  178661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:02:35.804023  178661 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:02:35.804107  178661 ssh_runner.go:195] Run: which lz4
	I1028 12:02:35.808986  178661 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:02:35.813680  178661 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:02:35.813713  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:02:37.494165  178661 crio.go:462] duration metric: took 1.685216387s to copy over tarball
	I1028 12:02:37.494239  178661 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:02:39.889167  178661 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.394897638s)
	I1028 12:02:39.889185  178661 crio.go:469] duration metric: took 2.394999007s to extract the tarball
	I1028 12:02:39.889192  178661 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:02:39.927396  178661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:02:39.975894  178661 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:02:39.975907  178661 cache_images.go:84] Images are preloaded, skipping loading
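As logged above, the preload path is: stat /preloaded.tar.lz4, scp the cached tarball over when it is missing, unpack it with tar -I lz4 into /var, delete it, and re-run crictl images to confirm the images landed. Below is a rough sketch of that unpack step, shelling out to tar with the same flags seen in the log; the helper itself is an assumption for illustration, not minikube's crio.go.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the lz4-compressed preload tarball into destDir and
// removes it afterwards, as the log above does for /preloaded.tar.lz4.
func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not available: %w", err)
	}
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}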
	I1028 12:02:39.975914  178661 kubeadm.go:934] updating node { 192.168.61.189 8443 v1.31.2 crio true true} ...
	I1028 12:02:39.976003  178661 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=NoKubernetes-606176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:NoKubernetes-606176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
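The kubelet [Unit]/[Service] drop-in printed above is rendered from the cluster config (container runtime, kubelet binary path, node name, node IP). A minimal text/template sketch of how such a drop-in could be produced, using the values from this log; the template string and struct are assumptions for illustration, not minikube's kubeadm.go.

package main

import (
	"os"
	"text/template"
)

// dropIn mirrors the 10-kubeadm.conf content shown in the log above.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, struct {
		Runtime, KubeletPath, NodeName, NodeIP string
	}{
		Runtime:     "crio",
		KubeletPath: "/var/lib/minikube/binaries/v1.31.2/kubelet",
		NodeName:    "NoKubernetes-606176",
		NodeIP:      "192.168.61.189",
	})
}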
	I1028 12:02:39.976077  178661 ssh_runner.go:195] Run: crio config
	I1028 12:02:40.036739  178661 cni.go:84] Creating CNI manager for ""
	I1028 12:02:40.036750  178661 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:02:40.036759  178661 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:02:40.036781  178661 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.189 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-606176 NodeName:NoKubernetes-606176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:02:40.036924  178661 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "NoKubernetes-606176"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.189"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.189"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:02:40.036993  178661 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:02:40.050860  178661 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:02:40.050926  178661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:02:40.060771  178661 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1028 12:02:40.080031  178661 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:02:40.097386  178661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
	I1028 12:02:40.115666  178661 ssh_runner.go:195] Run: grep 192.168.61.189	control-plane.minikube.internal$ /etc/hosts
	I1028 12:02:40.120706  178661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.189	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:02:40.135110  178661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:02:40.292584  178661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:02:40.312060  178661 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176 for IP: 192.168.61.189
	I1028 12:02:40.312082  178661 certs.go:194] generating shared ca certs ...
	I1028 12:02:40.312096  178661 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.312292  178661 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:02:40.312345  178661 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:02:40.312353  178661 certs.go:256] generating profile certs ...
	I1028 12:02:40.312420  178661 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.key
	I1028 12:02:40.312436  178661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.crt with IP's: []
	I1028 12:02:40.533096  178661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.crt ...
	I1028 12:02:40.533112  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.crt: {Name:mk42ccbff3b47f2e90827522ac56f68ab696f8eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.533324  178661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.key ...
	I1028 12:02:40.533337  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.key: {Name:mk6b971a9976c6de0a9371708b6a00a2c8713fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.534036  178661 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key.f2a33afc
	I1028 12:02:40.534050  178661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt.f2a33afc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.189]
	I1028 12:02:40.624366  178661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt.f2a33afc ...
	I1028 12:02:40.624381  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt.f2a33afc: {Name:mk023ffd5739f4e569c2704597c4ebc85a39b116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.624549  178661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key.f2a33afc ...
	I1028 12:02:40.624556  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key.f2a33afc: {Name:mkd9458ceffcfc7d28272577de72f60fa124ae1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.624625  178661 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt.f2a33afc -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt
	I1028 12:02:40.624696  178661 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key.f2a33afc -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key
	I1028 12:02:40.624740  178661 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.key
	I1028 12:02:40.624751  178661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.crt with IP's: []
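The certs.go / crypto.go lines above generate CA-signed profile certificates: a client cert for "minikube-user", an apiserver serving cert with the service IP, localhost, and node IP as SANs, and a proxy-client cert for the aggregator. For orientation only, here is a self-contained crypto/x509 sketch that creates a throwaway CA and signs a client certificate with it; the subject names, group, key sizes, and validity are illustrative assumptions, not necessarily the values minikube uses.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the existing minikubeCA key/cert.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Client certificate for the profile user, signed by the CA above.
	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	client := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, err := x509.CreateCertificate(rand.Reader, client, caCert, &clientKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER}))
}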
	I1028 12:02:37.528438  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:37.528899  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:37.528922  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:37.528858  179174 retry.go:31] will retry after 826.387192ms: waiting for machine to come up
	I1028 12:02:38.357097  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:38.357568  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:38.357624  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:38.357551  179174 retry.go:31] will retry after 768.995626ms: waiting for machine to come up
	I1028 12:02:39.128387  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:39.128967  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:39.128995  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:39.128925  179174 retry.go:31] will retry after 943.551295ms: waiting for machine to come up
	I1028 12:02:40.074186  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:40.074689  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:40.074720  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:40.074636  179174 retry.go:31] will retry after 1.137013569s: waiting for machine to come up
	I1028 12:02:41.212978  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:41.213570  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:41.213597  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:41.213484  179174 retry.go:31] will retry after 1.981073277s: waiting for machine to come up
	I1028 12:02:40.780856  176084 pod_ready.go:103] pod "kube-scheduler-pause-729494" in "kube-system" namespace has status "Ready":"False"
	I1028 12:02:43.280872  176084 pod_ready.go:93] pod "kube-scheduler-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:43.280903  176084 pod_ready.go:82] duration metric: took 4.508453455s for pod "kube-scheduler-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.280914  176084 pod_ready.go:39] duration metric: took 12.053174948s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:02:43.280938  176084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:02:43.305816  176084 ops.go:34] apiserver oom_adj: -16
	I1028 12:02:43.305847  176084 kubeadm.go:597] duration metric: took 24.449261721s to restartPrimaryControlPlane
	I1028 12:02:43.305861  176084 kubeadm.go:394] duration metric: took 24.658223087s to StartCluster
	I1028 12:02:43.305883  176084 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:43.305970  176084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:02:43.306814  176084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:43.307057  176084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:02:43.307240  176084 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:02:43.307724  176084 config.go:182] Loaded profile config "pause-729494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:43.308723  176084 out.go:177] * Enabled addons: 
	I1028 12:02:43.308737  176084 out.go:177] * Verifying Kubernetes components...
	I1028 12:02:43.310769  176084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:02:43.310941  176084 addons.go:510] duration metric: took 3.713774ms for enable addons: enabled=[]
	I1028 12:02:43.585407  176084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:02:43.610233  176084 node_ready.go:35] waiting up to 6m0s for node "pause-729494" to be "Ready" ...
	I1028 12:02:43.614943  176084 node_ready.go:49] node "pause-729494" has status "Ready":"True"
	I1028 12:02:43.614982  176084 node_ready.go:38] duration metric: took 4.711656ms for node "pause-729494" to be "Ready" ...
	I1028 12:02:43.614995  176084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:02:43.624599  176084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.634255  176084 pod_ready.go:93] pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:43.634292  176084 pod_ready.go:82] duration metric: took 9.59593ms for pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.634308  176084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.640959  176084 pod_ready.go:93] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:43.640986  176084 pod_ready.go:82] duration metric: took 6.669647ms for pod "etcd-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.640999  176084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.840907  176084 pod_ready.go:93] pod "kube-apiserver-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:43.840938  176084 pod_ready.go:82] duration metric: took 199.93075ms for pod "kube-apiserver-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.840953  176084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:44.242258  176084 pod_ready.go:93] pod "kube-controller-manager-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:44.242288  176084 pod_ready.go:82] duration metric: took 401.324613ms for pod "kube-controller-manager-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:44.242301  176084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nllwf" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:40.740435  178661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.crt ...
	I1028 12:02:40.740449  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.crt: {Name:mka461f0bd7149c619305492cab62b49f2cfc9e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.740623  178661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.key ...
	I1028 12:02:40.740632  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.key: {Name:mkaa73e54bed3cf848ca71bdcd979b6e50b24313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.740799  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:02:40.740831  178661 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:02:40.740837  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:02:40.740859  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:02:40.740876  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:02:40.740895  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:02:40.740927  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:02:40.741571  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:02:40.780410  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:02:40.817454  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:02:40.849490  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:02:40.880074  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 12:02:40.907691  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:02:40.935695  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:02:40.964319  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:02:40.998533  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:02:41.039107  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:02:41.073550  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:02:41.099726  178661 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:02:41.119496  178661 ssh_runner.go:195] Run: openssl version
	I1028 12:02:41.128267  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:02:41.141798  178661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:02:41.147071  178661 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:02:41.147132  178661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:02:41.153744  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:02:41.166333  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:02:41.178826  178661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:02:41.184044  178661 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:02:41.184105  178661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:02:41.190588  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:02:41.203959  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:02:41.217766  178661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:02:41.222881  178661 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:02:41.222940  178661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:02:41.229354  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
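The block above installs each CA into the system trust store: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as /etc/ssl/certs/<hash>.0. A rough sketch of that hash-and-link step, under the assumption that openssl is on PATH; not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the subject hash of a PEM certificate and links it
// into certsDir as <hash>.0, as the ln -fs commands in the log above do.
func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}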
	I1028 12:02:41.243654  178661 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:02:41.249226  178661 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:02:41.249283  178661 kubeadm.go:392] StartCluster: {Name:NoKubernetes-606176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.2 ClusterName:NoKubernetes-606176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.189 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:02:41.249366  178661 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:02:41.249410  178661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:02:41.296980  178661 cri.go:89] found id: ""
	I1028 12:02:41.297059  178661 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:02:41.308451  178661 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:02:41.320583  178661 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:02:41.333395  178661 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:02:41.333405  178661 kubeadm.go:157] found existing configuration files:
	
	I1028 12:02:41.333461  178661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:02:41.344881  178661 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:02:41.344952  178661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:02:41.356519  178661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:02:41.367239  178661 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:02:41.367325  178661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:02:41.378250  178661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:02:41.390443  178661 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:02:41.390513  178661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:02:41.402575  178661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:02:41.414288  178661 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:02:41.414334  178661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:02:41.424933  178661 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:02:41.609002  178661 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:02:44.642979  176084 pod_ready.go:93] pod "kube-proxy-nllwf" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:44.643012  176084 pod_ready.go:82] duration metric: took 400.70208ms for pod "kube-proxy-nllwf" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:44.643028  176084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:45.044821  176084 pod_ready.go:93] pod "kube-scheduler-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:45.044856  176084 pod_ready.go:82] duration metric: took 401.818535ms for pod "kube-scheduler-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:45.044868  176084 pod_ready.go:39] duration metric: took 1.429859372s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:02:45.044890  176084 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:02:45.044956  176084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:02:45.066376  176084 api_server.go:72] duration metric: took 1.759277156s to wait for apiserver process to appear ...
	I1028 12:02:45.066413  176084 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:02:45.066443  176084 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I1028 12:02:45.075141  176084 api_server.go:279] https://192.168.50.55:8443/healthz returned 200:
	ok
	I1028 12:02:45.077401  176084 api_server.go:141] control plane version: v1.31.2
	I1028 12:02:45.077430  176084 api_server.go:131] duration metric: took 11.007517ms to wait for apiserver health ...
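The api_server.go lines above wait for the kube-apiserver process, then probe https://192.168.50.55:8443/healthz until it returns 200 "ok" and report the control-plane version. A minimal sketch of such a healthz poll; skipping TLS verification and the fixed retry interval are simplifying assumptions for illustration, not how minikube's api_server.go is written.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it answers 200 "ok"
// or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.55:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}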
	I1028 12:02:45.077442  176084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:02:45.245247  176084 system_pods.go:59] 6 kube-system pods found
	I1028 12:02:45.245291  176084 system_pods.go:61] "coredns-7c65d6cfc9-2x9sx" [6c991e2e-d7bc-4aee-a537-4885075a5453] Running
	I1028 12:02:45.245300  176084 system_pods.go:61] "etcd-pause-729494" [f3e0e1e4-25c5-4343-8256-24c8080d6f9b] Running
	I1028 12:02:45.245306  176084 system_pods.go:61] "kube-apiserver-pause-729494" [47a5b86a-6abd-42f5-86bb-cec0d357827c] Running
	I1028 12:02:45.245311  176084 system_pods.go:61] "kube-controller-manager-pause-729494" [cdfc5376-eb2e-46ae-a83d-bbfeddb8319c] Running
	I1028 12:02:45.245317  176084 system_pods.go:61] "kube-proxy-nllwf" [e08aea94-206d-4bec-96b4-8fb7703efeda] Running
	I1028 12:02:45.245322  176084 system_pods.go:61] "kube-scheduler-pause-729494" [6666e789-7fb0-4bd4-bc83-9228d9aa987d] Running
	I1028 12:02:45.245330  176084 system_pods.go:74] duration metric: took 167.879776ms to wait for pod list to return data ...
	I1028 12:02:45.245339  176084 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:02:45.441748  176084 default_sa.go:45] found service account: "default"
	I1028 12:02:45.441810  176084 default_sa.go:55] duration metric: took 196.461049ms for default service account to be created ...
	I1028 12:02:45.441825  176084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:02:45.645363  176084 system_pods.go:86] 6 kube-system pods found
	I1028 12:02:45.645401  176084 system_pods.go:89] "coredns-7c65d6cfc9-2x9sx" [6c991e2e-d7bc-4aee-a537-4885075a5453] Running
	I1028 12:02:45.645409  176084 system_pods.go:89] "etcd-pause-729494" [f3e0e1e4-25c5-4343-8256-24c8080d6f9b] Running
	I1028 12:02:45.645415  176084 system_pods.go:89] "kube-apiserver-pause-729494" [47a5b86a-6abd-42f5-86bb-cec0d357827c] Running
	I1028 12:02:45.645421  176084 system_pods.go:89] "kube-controller-manager-pause-729494" [cdfc5376-eb2e-46ae-a83d-bbfeddb8319c] Running
	I1028 12:02:45.645427  176084 system_pods.go:89] "kube-proxy-nllwf" [e08aea94-206d-4bec-96b4-8fb7703efeda] Running
	I1028 12:02:45.645440  176084 system_pods.go:89] "kube-scheduler-pause-729494" [6666e789-7fb0-4bd4-bc83-9228d9aa987d] Running
	I1028 12:02:45.645450  176084 system_pods.go:126] duration metric: took 203.614298ms to wait for k8s-apps to be running ...
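The system_pods.go check above lists the kube-system pods and confirms the k8s-apps are running. A hedged client-go sketch of the same idea (list kube-system pods, report whether each is in the Running phase); the kubeconfig path is taken from this log, while the module versions and helper structure are assumptions, not minikube's system_pods.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as written by the run above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19876-132631/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Report each kube-system pod and whether it has reached the Running phase.
		fmt.Printf("%-45s running=%v\n", p.Name, p.Status.Phase == corev1.PodRunning)
	}
}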
	I1028 12:02:45.645466  176084 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:02:45.645520  176084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:02:45.665674  176084 system_svc.go:56] duration metric: took 20.187979ms WaitForService to wait for kubelet
	I1028 12:02:45.665715  176084 kubeadm.go:582] duration metric: took 2.358625746s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:02:45.665741  176084 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:02:45.842608  176084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:02:45.842640  176084 node_conditions.go:123] node cpu capacity is 2
	I1028 12:02:45.842655  176084 node_conditions.go:105] duration metric: took 176.908393ms to run NodePressure ...
	I1028 12:02:45.842670  176084 start.go:241] waiting for startup goroutines ...
	I1028 12:02:45.842679  176084 start.go:246] waiting for cluster config update ...
	I1028 12:02:45.842690  176084 start.go:255] writing updated cluster config ...
	I1028 12:02:45.843035  176084 ssh_runner.go:195] Run: rm -f paused
	I1028 12:02:45.905871  176084 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:02:45.908309  176084 out.go:177] * Done! kubectl is now configured to use "pause-729494" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.788848767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116966788807013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e89d6e4-90db-4524-975c-e6dce87d93c7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.789719996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3c006a1-a502-4d76-9276-4423ee4563db name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.789827302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3c006a1-a502-4d76-9276-4423ee4563db name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.790342698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116945841977407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116945825446597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116945849329060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116945816751890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879,PodSandboxId:146b410f0c64f09b36dababd64c7f9593d9c3881cad39def597a4f41a6ca3685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116938958292363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5,PodSandboxId:3b2920750504930ff1b54b5b163fe85e379df4b61717d91193ee26b2ed3db846,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116938198809092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730116938184371362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730116938124870739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes
.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730116938098977616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16
c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730116938033562049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39,PodSandboxId:3a497834ed50f01f15fe615a557e694031874b1777e9b64ace2de471bc3637ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730116871383405925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18,PodSandboxId:1e064f7d3e18806b851452ab037bc4b77dcb12c1cf95f1c35fb98741c223b65c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730116870896451174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3c006a1-a502-4d76-9276-4423ee4563db name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.850005716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee1044d0-6052-4e7d-9df5-941a7dd82e63 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.850091418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee1044d0-6052-4e7d-9df5-941a7dd82e63 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.854633626Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cd9453a-a6e7-4b14-b594-439298a4e671 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.855493032Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116966855427355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cd9453a-a6e7-4b14-b594-439298a4e671 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.856608147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=030e22cf-d4b9-401b-ab23-83ce2401f272 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.856708136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=030e22cf-d4b9-401b-ab23-83ce2401f272 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.857140303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116945841977407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116945825446597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116945849329060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116945816751890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879,PodSandboxId:146b410f0c64f09b36dababd64c7f9593d9c3881cad39def597a4f41a6ca3685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116938958292363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5,PodSandboxId:3b2920750504930ff1b54b5b163fe85e379df4b61717d91193ee26b2ed3db846,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116938198809092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730116938184371362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730116938124870739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes
.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730116938098977616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16
c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730116938033562049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39,PodSandboxId:3a497834ed50f01f15fe615a557e694031874b1777e9b64ace2de471bc3637ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730116871383405925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18,PodSandboxId:1e064f7d3e18806b851452ab037bc4b77dcb12c1cf95f1c35fb98741c223b65c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730116870896451174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=030e22cf-d4b9-401b-ab23-83ce2401f272 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.917804643Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fb50eb1-fb5f-404d-a361-d957d35a8c10 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.918036365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fb50eb1-fb5f-404d-a361-d957d35a8c10 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.919695504Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db2471c9-6910-4919-974d-b2e9026a433e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.920529329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116966920487738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db2471c9-6910-4919-974d-b2e9026a433e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.921308965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3b2e70e-072b-431d-a9fa-42ab7753e9a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.921391322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3b2e70e-072b-431d-a9fa-42ab7753e9a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.921744325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116945841977407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116945825446597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116945849329060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116945816751890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879,PodSandboxId:146b410f0c64f09b36dababd64c7f9593d9c3881cad39def597a4f41a6ca3685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116938958292363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5,PodSandboxId:3b2920750504930ff1b54b5b163fe85e379df4b61717d91193ee26b2ed3db846,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116938198809092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730116938184371362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730116938124870739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes
.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730116938098977616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16
c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730116938033562049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39,PodSandboxId:3a497834ed50f01f15fe615a557e694031874b1777e9b64ace2de471bc3637ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730116871383405925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18,PodSandboxId:1e064f7d3e18806b851452ab037bc4b77dcb12c1cf95f1c35fb98741c223b65c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730116870896451174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3b2e70e-072b-431d-a9fa-42ab7753e9a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.975053730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09f1a1cb-17ea-4fbb-be63-f50d943b2700 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.975176454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09f1a1cb-17ea-4fbb-be63-f50d943b2700 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.978125731Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e77d458-d003-4a5a-a8af-f171b58b03ea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.978607288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116966978576948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e77d458-d003-4a5a-a8af-f171b58b03ea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.979441462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=702ba6f0-182b-44d1-b1cc-85732b09b2ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.979547207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=702ba6f0-182b-44d1-b1cc-85732b09b2ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:46 pause-729494 crio[2087]: time="2024-10-28 12:02:46.979997194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116945841977407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116945825446597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116945849329060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116945816751890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879,PodSandboxId:146b410f0c64f09b36dababd64c7f9593d9c3881cad39def597a4f41a6ca3685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116938958292363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5,PodSandboxId:3b2920750504930ff1b54b5b163fe85e379df4b61717d91193ee26b2ed3db846,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116938198809092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730116938184371362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730116938124870739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes
.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730116938098977616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16
c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730116938033562049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39,PodSandboxId:3a497834ed50f01f15fe615a557e694031874b1777e9b64ace2de471bc3637ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730116871383405925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18,PodSandboxId:1e064f7d3e18806b851452ab037bc4b77dcb12c1cf95f1c35fb98741c223b65c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730116870896451174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=702ba6f0-182b-44d1-b1cc-85732b09b2ca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f7f2dea58f360       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   21 seconds ago       Running             kube-controller-manager   2                   c9fa80b1ebf13       kube-controller-manager-pause-729494
	4d0db03afd590       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   21 seconds ago       Running             kube-scheduler            2                   a90b43ac79dc5       kube-scheduler-pause-729494
	0e14b4f24dcba       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 seconds ago       Running             kube-apiserver            2                   d3458d7ef61b3       kube-apiserver-pause-729494
	96fb536ef57f7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   21 seconds ago       Running             etcd                      2                   17ef1e7cdb8ed       etcd-pause-729494
	a9e32e54acba6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   28 seconds ago       Running             coredns                   1                   146b410f0c64f       coredns-7c65d6cfc9-2x9sx
	2a8a09df850d5       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   28 seconds ago       Running             kube-proxy                1                   3b29207505049       kube-proxy-nllwf
	ace56dcad4dec       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   28 seconds ago       Exited              kube-apiserver            1                   d3458d7ef61b3       kube-apiserver-pause-729494
	5741524d0fbbc       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   28 seconds ago       Exited              kube-controller-manager   1                   c9fa80b1ebf13       kube-controller-manager-pause-729494
	b185d1303bb1d       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   28 seconds ago       Exited              kube-scheduler            1                   a90b43ac79dc5       kube-scheduler-pause-729494
	02770335761f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago       Exited              etcd                      1                   17ef1e7cdb8ed       etcd-pause-729494
	8072e0f3b1595       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   3a497834ed50f       coredns-7c65d6cfc9-2x9sx
	128cedd262b0e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   About a minute ago   Exited              kube-proxy                0                   1e064f7d3e188       kube-proxy-nllwf
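
A listing like the one above can be reproduced directly against the node's CRI-O runtime. A minimal sketch, assuming the profile name pause-729494 and that crictl is available on the guest (illustrative only, not part of the captured test output):

    # List all containers, including exited ones, over the CRI socket on the minikube guest
    minikube -p pause-729494 ssh -- sudo crictl ps -a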
	
	
	==> coredns [8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42376 - 52013 "HINFO IN 1982915378677061536.8477218873360209454. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0196642s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
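
The connection-refused errors above fall in the window where kube-apiserver was being restarted; 10.96.0.1:443 is the in-cluster Service address for the API server, so CoreDNS cannot relist Services, EndpointSlices, or Namespaces until it returns. A hedged way to confirm CoreDNS settled once the apiserver came back (commands assume the kubectl context matches the minikube profile name; they are illustrative, not from the test run):

    # CoreDNS pods should report Running/Ready once the apiserver is reachable again
    kubectl --context pause-729494 -n kube-system get pods -l k8s-app=kube-dns
    # Recent logs should no longer show "connection refused" against 10.96.0.1:443
    kubectl --context pause-729494 -n kube-system logs -l k8s-app=kube-dns --tail=20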
	
	
	==> describe nodes <==
	Name:               pause-729494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-729494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=pause-729494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_01_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:01:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-729494
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:02:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:02:29 +0000   Mon, 28 Oct 2024 12:01:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:02:29 +0000   Mon, 28 Oct 2024 12:01:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:02:29 +0000   Mon, 28 Oct 2024 12:01:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:02:29 +0000   Mon, 28 Oct 2024 12:01:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.55
	  Hostname:    pause-729494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 63324c7aee4148babf6e389e90938a65
	  System UUID:                63324c7a-ee41-48ba-bf6e-389e90938a65
	  Boot ID:                    30071432-2695-4783-b7af-61b13af0d389
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2x9sx                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     98s
	  kube-system                 etcd-pause-729494                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         103s
	  kube-system                 kube-apiserver-pause-729494             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-pause-729494    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-nllwf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-pause-729494             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  110s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     109s (x7 over 110s)  kubelet          Node pause-729494 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  109s (x8 over 110s)  kubelet          Node pause-729494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 110s)  kubelet          Node pause-729494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node pause-729494 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node pause-729494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node pause-729494 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeReady                102s                 kubelet          Node pause-729494 status is now: NodeReady
	  Normal  RegisteredNode           99s                  node-controller  Node pause-729494 event: Registered Node pause-729494 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-729494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-729494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node pause-729494 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                  node-controller  Node pause-729494 event: Registered Node pause-729494 in Controller
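
The node description above is standard kubectl output and can be regenerated at any point while debugging (context name assumed to match the minikube profile):

    kubectl --context pause-729494 describe node pause-729494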
	
	
	==> dmesg <==
	[  +0.059870] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057580] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.215578] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.111529] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.282862] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.375924] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +0.069496] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.012504] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.075155] kauditd_printk_skb: 18 callbacks suppressed
	[Oct28 12:01] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.080087] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.865879] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[  +0.312441] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.235534] kauditd_printk_skb: 50 callbacks suppressed
	[Oct28 12:02] systemd-fstab-generator[2012]: Ignoring "noauto" option for root device
	[  +0.211999] systemd-fstab-generator[2024]: Ignoring "noauto" option for root device
	[  +0.244627] systemd-fstab-generator[2038]: Ignoring "noauto" option for root device
	[  +0.155247] systemd-fstab-generator[2050]: Ignoring "noauto" option for root device
	[  +0.337556] systemd-fstab-generator[2078]: Ignoring "noauto" option for root device
	[  +6.302279] systemd-fstab-generator[2199]: Ignoring "noauto" option for root device
	[  +0.082297] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.562266] kauditd_printk_skb: 85 callbacks suppressed
	[  +2.279950] systemd-fstab-generator[2964]: Ignoring "noauto" option for root device
	[  +4.607654] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.721203] systemd-fstab-generator[3321]: Ignoring "noauto" option for root device
	
	
	==> etcd [02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f] <==
	{"level":"info","ts":"2024-10-28T12:02:20.067168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T12:02:20.067221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 received MsgPreVoteResp from 328c932a5e3b8b76 at term 2"}
	{"level":"info","ts":"2024-10-28T12:02:20.067287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:20.067325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 received MsgVoteResp from 328c932a5e3b8b76 at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:20.067356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became leader at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:20.067381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 328c932a5e3b8b76 elected leader 328c932a5e3b8b76 at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:20.076150Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"328c932a5e3b8b76","local-member-attributes":"{Name:pause-729494 ClientURLs:[https://192.168.50.55:2379]}","request-path":"/0/members/328c932a5e3b8b76/attributes","cluster-id":"e0630d851be0da94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:02:20.076948Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:02:20.083480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:02:20.088785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.55:2379"}
	{"level":"info","ts":"2024-10-28T12:02:20.093070Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:02:20.094653Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:02:20.099080Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:02:20.099143Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:02:20.106176Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T12:02:23.394458Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-28T12:02:23.394559Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-729494","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.55:2380"],"advertise-client-urls":["https://192.168.50.55:2379"]}
	{"level":"warn","ts":"2024-10-28T12:02:23.394727Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T12:02:23.394769Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T12:02:23.396555Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.55:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T12:02:23.396653Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.55:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-28T12:02:23.396861Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"328c932a5e3b8b76","current-leader-member-id":"328c932a5e3b8b76"}
	{"level":"info","ts":"2024-10-28T12:02:23.402210Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-10-28T12:02:23.402326Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-10-28T12:02:23.402357Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-729494","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.55:2380"],"advertise-client-urls":["https://192.168.50.55:2379"]}
	
	
	==> etcd [96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071] <==
	{"level":"info","ts":"2024-10-28T12:02:26.218399Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e0630d851be0da94","local-member-id":"328c932a5e3b8b76","added-peer-id":"328c932a5e3b8b76","added-peer-peer-urls":["https://192.168.50.55:2380"]}
	{"level":"info","ts":"2024-10-28T12:02:26.218563Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e0630d851be0da94","local-member-id":"328c932a5e3b8b76","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:02:26.218618Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:02:26.222395Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T12:02:26.237058Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"328c932a5e3b8b76","initial-advertise-peer-urls":["https://192.168.50.55:2380"],"listen-peer-urls":["https://192.168.50.55:2380"],"advertise-client-urls":["https://192.168.50.55:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.55:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T12:02:26.230871Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-10-28T12:02:26.239929Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T12:02:26.240088Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-10-28T12:02:27.976090Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:27.976152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:27.976196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 received MsgPreVoteResp from 328c932a5e3b8b76 at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:27.976210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became candidate at term 4"}
	{"level":"info","ts":"2024-10-28T12:02:27.976216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 received MsgVoteResp from 328c932a5e3b8b76 at term 4"}
	{"level":"info","ts":"2024-10-28T12:02:27.976224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became leader at term 4"}
	{"level":"info","ts":"2024-10-28T12:02:27.976231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 328c932a5e3b8b76 elected leader 328c932a5e3b8b76 at term 4"}
	{"level":"info","ts":"2024-10-28T12:02:27.977637Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"328c932a5e3b8b76","local-member-attributes":"{Name:pause-729494 ClientURLs:[https://192.168.50.55:2379]}","request-path":"/0/members/328c932a5e3b8b76/attributes","cluster-id":"e0630d851be0da94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:02:27.977684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:02:27.977663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:02:27.978737Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:02:27.979028Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:02:27.979807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T12:02:27.979941Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.55:2379"}
	{"level":"info","ts":"2024-10-28T12:02:27.980072Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:02:27.980104Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:02:43.025936Z","caller":"traceutil/trace.go:171","msg":"trace[647129839] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"192.675328ms","start":"2024-10-28T12:02:42.833151Z","end":"2024-10-28T12:02:43.025826Z","steps":["trace[647129839] 'process raft request'  (duration: 192.4452ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:02:47 up 2 min,  0 users,  load average: 0.96, 0.28, 0.09
	Linux pause-729494 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f] <==
	I1028 12:02:29.315963       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 12:02:29.318056       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1028 12:02:29.318589       1 shared_informer.go:320] Caches are synced for configmaps
	I1028 12:02:29.326419       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 12:02:29.326520       1 policy_source.go:224] refreshing policies
	I1028 12:02:29.351991       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 12:02:29.359869       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 12:02:29.375858       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 12:02:29.376134       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1028 12:02:29.377646       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1028 12:02:29.381098       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 12:02:29.382090       1 aggregator.go:171] initial CRD sync complete...
	I1028 12:02:29.382155       1 autoregister_controller.go:144] Starting autoregister controller
	I1028 12:02:29.382179       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1028 12:02:29.382203       1 cache.go:39] Caches are synced for autoregister controller
	I1028 12:02:29.392201       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1028 12:02:29.403491       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1028 12:02:30.219703       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 12:02:31.070229       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 12:02:31.085289       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 12:02:31.133700       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 12:02:31.173072       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 12:02:31.180419       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 12:02:33.031263       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 12:02:33.080240       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff] <==
	E1028 12:02:22.037104       1 customresource_discovery_controller.go:295] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	F1028 12:02:22.037168       1 hooks.go:210] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	I1028 12:02:22.157003       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1028 12:02:22.157156       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1028 12:02:22.157241       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for cluster_authentication_trust_controller" logger="UnhandledError"
	I1028 12:02:22.160992       1 cluster_authentication_trust_controller.go:451] Shutting down cluster_authentication_trust_controller controller
	I1028 12:02:22.161085       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	E1028 12:02:22.161292       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for configmaps" logger="UnhandledError"
	E1028 12:02:22.161391       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for LocalAvailability controller" logger="UnhandledError"
	E1028 12:02:22.161436       1 controller.go:89] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	F1028 12:02:22.161475       1 hooks.go:210] PostStartHook "crd-informer-synced" failed: timed out waiting for the condition
	E1028 12:02:22.251206       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for APIServiceRegistrationController controller" logger="UnhandledError"
	E1028 12:02:22.251329       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	I1028 12:02:22.251968       1 crd_finalizer.go:273] Shutting down CRDFinalizer
	I1028 12:02:22.252082       1 apiapproval_controller.go:193] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1028 12:02:22.252133       1 nonstructuralschema_controller.go:199] Shutting down NonStructuralSchemaConditionController
	I1028 12:02:22.252184       1 establishing_controller.go:85] Shutting down EstablishingController
	I1028 12:02:22.252220       1 naming_controller.go:298] Shutting down NamingConditionController
	E1028 12:02:22.252258       1 controller.go:95] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I1028 12:02:22.252297       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I1028 12:02:22.252330       1 apiservice_controller.go:104] Shutting down APIServiceRegistrationController
	I1028 12:02:22.252363       1 remote_available_controller.go:419] Shutting down RemoteAvailability controller
	I1028 12:02:22.252395       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1028 12:02:22.252429       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1028 12:02:22.252476       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce] <==
	I1028 12:02:19.545176       1 serving.go:386] Generated self-signed cert in-memory
	I1028 12:02:20.109797       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1028 12:02:20.109853       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:02:20.118566       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1028 12:02:20.122118       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:02:20.122423       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1028 12:02:20.122999       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b] <==
	I1028 12:02:32.774230       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1028 12:02:32.796174       1 shared_informer.go:320] Caches are synced for namespace
	I1028 12:02:32.800042       1 shared_informer.go:320] Caches are synced for service account
	I1028 12:02:32.803606       1 shared_informer.go:320] Caches are synced for job
	I1028 12:02:32.808130       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 12:02:32.812095       1 shared_informer.go:320] Caches are synced for disruption
	I1028 12:02:32.872630       1 shared_informer.go:320] Caches are synced for PV protection
	I1028 12:02:32.920534       1 shared_informer.go:320] Caches are synced for ephemeral
	I1028 12:02:32.923971       1 shared_informer.go:320] Caches are synced for PVC protection
	I1028 12:02:32.924098       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 12:02:32.927932       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 12:02:32.931875       1 shared_informer.go:320] Caches are synced for endpoint
	I1028 12:02:32.934321       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1028 12:02:32.946262       1 shared_informer.go:320] Caches are synced for stateful set
	I1028 12:02:32.955166       1 shared_informer.go:320] Caches are synced for expand
	I1028 12:02:32.980050       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 12:02:32.996161       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 12:02:33.025765       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1028 12:02:33.421500       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 12:02:33.424959       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 12:02:33.425111       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1028 12:02:36.150774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="75.680465ms"
	I1028 12:02:36.151283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.737µs"
	I1028 12:02:36.184251       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.039661ms"
	I1028 12:02:36.184718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="247.966µs"
	
	
	==> kube-proxy [128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:01:11.171403       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:01:11.194834       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.55"]
	E1028 12:01:11.195035       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:01:11.252677       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:01:11.252728       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:01:11.252758       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:01:11.256858       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:01:11.257260       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:01:11.257289       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:01:11.260328       1 config.go:199] "Starting service config controller"
	I1028 12:01:11.260864       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:01:11.261112       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:01:11.261137       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:01:11.262676       1 config.go:328] "Starting node config controller"
	I1028 12:01:11.262706       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:01:11.364065       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:01:11.364103       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:01:11.364130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5] <==
	 >
	E1028 12:02:19.971041       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:02:23.277279       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-729494\": dial tcp 192.168.50.55:8443: connect: connection refused - error from a previous attempt: unexpected EOF"
	E1028 12:02:24.340989       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-729494\": dial tcp 192.168.50.55:8443: connect: connection refused"
	I1028 12:02:29.417721       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.55"]
	E1028 12:02:29.417855       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:02:29.522399       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:02:29.522493       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:02:29.522538       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:02:29.526728       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:02:29.527144       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:02:29.527188       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:02:29.528526       1 config.go:199] "Starting service config controller"
	I1028 12:02:29.528633       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:02:29.529217       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:02:29.529229       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:02:29.529821       1 config.go:328] "Starting node config controller"
	I1028 12:02:29.529857       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:02:29.629454       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 12:02:29.629537       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:02:29.630075       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04] <==
	I1028 12:02:26.749862       1 serving.go:386] Generated self-signed cert in-memory
	W1028 12:02:29.286736       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 12:02:29.286991       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:02:29.287032       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 12:02:29.287056       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 12:02:29.360511       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 12:02:29.362951       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:02:29.365718       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 12:02:29.373019       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:02:29.373413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 12:02:29.373455       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 12:02:29.473999       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7] <==
	I1028 12:02:19.384423       1 serving.go:386] Generated self-signed cert in-memory
	W1028 12:02:22.253593       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 12:02:22.255956       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:02:22.256036       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 12:02:22.256071       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 12:02:23.282587       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 12:02:23.282821       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1028 12:02:23.282943       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1028 12:02:23.286947       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 12:02:23.287010       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 12:02:23.287032       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I1028 12:02:23.287504       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 12:02:23.287552       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:02:23.287589       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1028 12:02:23.287681       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I1028 12:02:23.287791       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1028 12:02:23.287933       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.699104    2971 kubelet_node_status.go:72] "Attempting to register node" node="pause-729494"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: E1028 12:02:25.700402    2971 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.55:8443: connect: connection refused" node="pause-729494"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.794009    2971 scope.go:117] "RemoveContainer" containerID="02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.796240    2971 scope.go:117] "RemoveContainer" containerID="ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.798384    2971 scope.go:117] "RemoveContainer" containerID="5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.800114    2971 scope.go:117] "RemoveContainer" containerID="b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: E1028 12:02:25.854446    2971 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.55:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-729494.18029c31f795e9f4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-729494,UID:pause-729494,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:pause-729494,},FirstTimestamp:2024-10-28 12:02:25.284819444 +0000 UTC m=+0.104589372,LastTimestamp:2024-10-28 12:02:25.284819444 +0000 UTC m=+0.104589372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-729494,}"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: E1028 12:02:25.916474    2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-729494?timeout=10s\": dial tcp 192.168.50.55:8443: connect: connection refused" interval="800ms"
	Oct 28 12:02:26 pause-729494 kubelet[2971]: I1028 12:02:26.102793    2971 kubelet_node_status.go:72] "Attempting to register node" node="pause-729494"
	Oct 28 12:02:26 pause-729494 kubelet[2971]: E1028 12:02:26.104396    2971 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.55:8443: connect: connection refused" node="pause-729494"
	Oct 28 12:02:26 pause-729494 kubelet[2971]: I1028 12:02:26.906289    2971 kubelet_node_status.go:72] "Attempting to register node" node="pause-729494"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: I1028 12:02:29.432513    2971 kubelet_node_status.go:111] "Node was previously registered" node="pause-729494"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: I1028 12:02:29.432766    2971 kubelet_node_status.go:75] "Successfully registered node" node="pause-729494"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: I1028 12:02:29.432810    2971 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: I1028 12:02:29.435007    2971 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: E1028 12:02:29.545121    2971 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-pause-729494\" already exists" pod="kube-system/etcd-pause-729494"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: E1028 12:02:29.545124    2971 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-729494\" already exists" pod="kube-system/kube-apiserver-pause-729494"
	Oct 28 12:02:30 pause-729494 kubelet[2971]: I1028 12:02:30.297352    2971 apiserver.go:52] "Watching apiserver"
	Oct 28 12:02:30 pause-729494 kubelet[2971]: I1028 12:02:30.308701    2971 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 28 12:02:30 pause-729494 kubelet[2971]: I1028 12:02:30.322717    2971 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e08aea94-206d-4bec-96b4-8fb7703efeda-xtables-lock\") pod \"kube-proxy-nllwf\" (UID: \"e08aea94-206d-4bec-96b4-8fb7703efeda\") " pod="kube-system/kube-proxy-nllwf"
	Oct 28 12:02:30 pause-729494 kubelet[2971]: I1028 12:02:30.323630    2971 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e08aea94-206d-4bec-96b4-8fb7703efeda-lib-modules\") pod \"kube-proxy-nllwf\" (UID: \"e08aea94-206d-4bec-96b4-8fb7703efeda\") " pod="kube-system/kube-proxy-nllwf"
	Oct 28 12:02:35 pause-729494 kubelet[2971]: E1028 12:02:35.406771    2971 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116955403206340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:02:35 pause-729494 kubelet[2971]: E1028 12:02:35.408259    2971 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116955403206340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:02:45 pause-729494 kubelet[2971]: E1028 12:02:45.410745    2971 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116965410079989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:02:45 pause-729494 kubelet[2971]: E1028 12:02:45.410946    2971 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116965410079989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-729494 -n pause-729494
helpers_test.go:261: (dbg) Run:  kubectl --context pause-729494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-729494 -n pause-729494
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-729494 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-729494 logs -n 25: (1.548427102s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo docker                         | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:01 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo cat                            | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo                                | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo find                           | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-903216 sudo crio                           | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-903216                                     | cilium-903216          | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC | 28 Oct 24 12:02 UTC |
	| start   | -p NoKubernetes-606176                               | NoKubernetes-606176    | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | --no-kubernetes                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=1.20                            |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p NoKubernetes-606176                               | NoKubernetes-606176    | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| stop    | stopped-upgrade-755815 stop                          | minikube               | jenkins | v1.26.0 | 28 Oct 24 12:02 UTC | 28 Oct 24 12:02 UTC |
	| start   | -p stopped-upgrade-755815                            | stopped-upgrade-755815 | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p running-upgrade-628680                            | running-upgrade-628680 | jenkins | v1.34.0 | 28 Oct 24 12:02 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:02:32
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:02:32.118572  179097 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:02:32.118822  179097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:32.118832  179097 out.go:358] Setting ErrFile to fd 2...
	I1028 12:02:32.118836  179097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:02:32.119018  179097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:02:32.119559  179097 out.go:352] Setting JSON to false
	I1028 12:02:32.120630  179097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6295,"bootTime":1730110657,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:02:32.120737  179097 start.go:139] virtualization: kvm guest
	I1028 12:02:32.122955  179097 out.go:177] * [running-upgrade-628680] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:02:32.124482  179097 notify.go:220] Checking for updates...
	I1028 12:02:32.124490  179097 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:02:32.125676  179097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:02:32.127379  179097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:02:32.128965  179097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:02:32.130315  179097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:02:32.131573  179097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:02:32.133473  179097 config.go:182] Loaded profile config "running-upgrade-628680": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 12:02:32.134159  179097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:32.134239  179097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:32.152055  179097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I1028 12:02:32.152579  179097 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:32.153075  179097 main.go:141] libmachine: Using API Version  1
	I1028 12:02:32.153104  179097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:32.153440  179097 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:32.153638  179097 main.go:141] libmachine: (running-upgrade-628680) Calling .DriverName
	I1028 12:02:32.155390  179097 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 12:02:32.156782  179097 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:02:32.157067  179097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:32.157105  179097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:32.172331  179097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I1028 12:02:32.172766  179097 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:32.173298  179097 main.go:141] libmachine: Using API Version  1
	I1028 12:02:32.173337  179097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:32.173730  179097 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:32.173901  179097 main.go:141] libmachine: (running-upgrade-628680) Calling .DriverName
	I1028 12:02:32.211280  179097 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:02:32.212577  179097 start.go:297] selected driver: kvm2
	I1028 12:02:32.212593  179097 start.go:901] validating driver "kvm2" against &{Name:running-upgrade-628680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-628680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.7 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 12:02:32.212709  179097 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:02:32.213402  179097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:32.213486  179097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:02:32.229628  179097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:02:32.230089  179097 cni.go:84] Creating CNI manager for ""
	I1028 12:02:32.230151  179097 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:02:32.230206  179097 start.go:340] cluster config:
	{Name:running-upgrade-628680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-628680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.7 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1028 12:02:32.230310  179097 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:02:32.232290  179097 out.go:177] * Starting "running-upgrade-628680" primary control-plane node in "running-upgrade-628680" cluster
	I1028 12:02:34.186711  178984 start.go:364] duration metric: took 11.91470295s to acquireMachinesLock for "stopped-upgrade-755815"
	I1028 12:02:34.186777  178984 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:02:34.186787  178984 fix.go:54] fixHost starting: 
	I1028 12:02:34.187213  178984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:02:34.187267  178984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:02:34.205010  178984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I1028 12:02:34.205446  178984 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:02:34.205932  178984 main.go:141] libmachine: Using API Version  1
	I1028 12:02:34.205958  178984 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:02:34.206325  178984 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:02:34.206520  178984 main.go:141] libmachine: (stopped-upgrade-755815) Calling .DriverName
	I1028 12:02:34.206682  178984 main.go:141] libmachine: (stopped-upgrade-755815) Calling .GetState
	I1028 12:02:34.208429  178984 fix.go:112] recreateIfNeeded on stopped-upgrade-755815: state=Stopped err=<nil>
	I1028 12:02:34.208457  178984 main.go:141] libmachine: (stopped-upgrade-755815) Calling .DriverName
	W1028 12:02:34.208607  178984 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:02:34.210610  178984 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-755815" ...
	I1028 12:02:30.900578  176084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:02:30.913781  176084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:02:30.934017  176084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:02:30.934114  176084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1028 12:02:30.934136  176084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1028 12:02:30.947953  176084 system_pods.go:59] 6 kube-system pods found
	I1028 12:02:30.947990  176084 system_pods.go:61] "coredns-7c65d6cfc9-2x9sx" [6c991e2e-d7bc-4aee-a537-4885075a5453] Running
	I1028 12:02:30.948002  176084 system_pods.go:61] "etcd-pause-729494" [f3e0e1e4-25c5-4343-8256-24c8080d6f9b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:02:30.948012  176084 system_pods.go:61] "kube-apiserver-pause-729494" [47a5b86a-6abd-42f5-86bb-cec0d357827c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:02:30.948022  176084 system_pods.go:61] "kube-controller-manager-pause-729494" [cdfc5376-eb2e-46ae-a83d-bbfeddb8319c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:02:30.948035  176084 system_pods.go:61] "kube-proxy-nllwf" [e08aea94-206d-4bec-96b4-8fb7703efeda] Running
	I1028 12:02:30.948048  176084 system_pods.go:61] "kube-scheduler-pause-729494" [6666e789-7fb0-4bd4-bc83-9228d9aa987d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:02:30.948056  176084 system_pods.go:74] duration metric: took 14.012984ms to wait for pod list to return data ...
	I1028 12:02:30.948066  176084 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:02:30.953672  176084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:02:30.953701  176084 node_conditions.go:123] node cpu capacity is 2
	I1028 12:02:30.953713  176084 node_conditions.go:105] duration metric: took 5.640082ms to run NodePressure ...
	I1028 12:02:30.953734  176084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:02:31.223239  176084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:02:31.227689  176084 kubeadm.go:739] kubelet initialised
	I1028 12:02:31.227716  176084 kubeadm.go:740] duration metric: took 4.442639ms waiting for restarted kubelet to initialise ...
	I1028 12:02:31.227727  176084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:02:31.231928  176084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:31.237192  176084 pod_ready.go:93] pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:31.237229  176084 pod_ready.go:82] duration metric: took 5.273291ms for pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:31.237243  176084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:33.243840  176084 pod_ready.go:103] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"False"
	I1028 12:02:32.738576  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.738948  178661 main.go:141] libmachine: (NoKubernetes-606176) Found IP for machine: 192.168.61.189
	I1028 12:02:32.738961  178661 main.go:141] libmachine: (NoKubernetes-606176) Reserving static IP address...
	I1028 12:02:32.738973  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has current primary IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.739272  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-606176", mac: "52:54:00:44:0a:13", ip: "192.168.61.189"} in network mk-NoKubernetes-606176
	I1028 12:02:32.819412  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | Getting to WaitForSSH function...
	I1028 12:02:32.819428  178661 main.go:141] libmachine: (NoKubernetes-606176) Reserved static IP address: 192.168.61.189
	I1028 12:02:32.819467  178661 main.go:141] libmachine: (NoKubernetes-606176) Waiting for SSH to be available...
	I1028 12:02:32.822128  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.822580  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:32.822604  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.822751  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | Using SSH client type: external
	I1028 12:02:32.822774  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa (-rw-------)
	I1028 12:02:32.822800  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:02:32.822812  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | About to run SSH command:
	I1028 12:02:32.822834  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | exit 0
	I1028 12:02:32.953854  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | SSH cmd err, output: <nil>: 
	I1028 12:02:32.954184  178661 main.go:141] libmachine: (NoKubernetes-606176) KVM machine creation complete!
	I1028 12:02:32.954495  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetConfigRaw
	I1028 12:02:32.955085  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:32.955256  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:32.955433  178661 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 12:02:32.955442  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetState
	I1028 12:02:32.956861  178661 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 12:02:32.956868  178661 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 12:02:32.956872  178661 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 12:02:32.956876  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:32.959356  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.959708  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:32.959730  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:32.959878  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:32.960032  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:32.960191  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:32.960316  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:32.960443  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:32.960694  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:32.960700  178661 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 12:02:33.061207  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:02:33.061220  178661 main.go:141] libmachine: Detecting the provisioner...
	I1028 12:02:33.061247  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.064317  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.064661  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.064683  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.064802  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.064983  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.065094  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.065204  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.065354  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:33.065560  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:33.065568  178661 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 12:02:33.170474  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 12:02:33.170535  178661 main.go:141] libmachine: found compatible host: buildroot
	I1028 12:02:33.170539  178661 main.go:141] libmachine: Provisioning with buildroot...
	I1028 12:02:33.170545  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetMachineName
	I1028 12:02:33.170769  178661 buildroot.go:166] provisioning hostname "NoKubernetes-606176"
	I1028 12:02:33.170785  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetMachineName
	I1028 12:02:33.170894  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.173437  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.173847  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.173882  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.174009  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.174195  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.174353  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.174472  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.174591  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:33.174761  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:33.174767  178661 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-606176 && echo "NoKubernetes-606176" | sudo tee /etc/hostname
	I1028 12:02:33.290856  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-606176
	
	I1028 12:02:33.290877  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.293996  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.294383  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.294399  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.294582  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.294769  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.294923  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.295075  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.295274  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:33.295447  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:33.295457  178661 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-606176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-606176/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-606176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:02:33.407173  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:02:33.407192  178661 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:02:33.407219  178661 buildroot.go:174] setting up certificates
	I1028 12:02:33.407228  178661 provision.go:84] configureAuth start
	I1028 12:02:33.407236  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetMachineName
	I1028 12:02:33.407477  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetIP
	I1028 12:02:33.410477  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.410828  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.410844  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.411007  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.413106  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.413431  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.413454  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.413560  178661 provision.go:143] copyHostCerts
	I1028 12:02:33.413633  178661 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:02:33.413649  178661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:02:33.413701  178661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:02:33.413793  178661 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:02:33.413796  178661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:02:33.413814  178661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:02:33.413879  178661 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:02:33.413882  178661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:02:33.413897  178661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:02:33.413950  178661 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-606176 san=[127.0.0.1 192.168.61.189 NoKubernetes-606176 localhost minikube]
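	Note: the "generating server cert" entry above creates the machine's server certificate with subject alternative names covering 127.0.0.1, the VM IP, the machine name, localhost and minikube, signed by the test host's CA key pair. As a rough, self-contained sketch of SAN-bearing certificate generation with Go's standard library (self-signed here for brevity; file names are illustrative, not minikube's actual code):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Template carrying the SANs listed in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-606176"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"NoKubernetes-606176", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.189")},
	}

	// Self-signed for brevity; the real flow signs with the ca.pem/ca-key.pem pair.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	certOut, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyOut, err := os.Create("server-key.pem")
	if err != nil {
		panic(err)
	}
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
```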
	I1028 12:02:33.551586  178661 provision.go:177] copyRemoteCerts
	I1028 12:02:33.551628  178661 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:02:33.551650  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.554105  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.554404  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.554443  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.554632  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.554787  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.554889  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.554980  178661 sshutil.go:53] new ssh client: &{IP:192.168.61.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa Username:docker}
	I1028 12:02:33.637012  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:02:33.663728  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 12:02:33.689940  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:02:33.715667  178661 provision.go:87] duration metric: took 308.426393ms to configureAuth
	I1028 12:02:33.715685  178661 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:02:33.715838  178661 config.go:182] Loaded profile config "NoKubernetes-606176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:33.715895  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.718464  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.718767  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.718788  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.718940  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.719107  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.719235  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.719369  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.719486  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:33.719734  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:33.719749  178661 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:02:33.946740  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:02:33.946772  178661 main.go:141] libmachine: Checking connection to Docker...
	I1028 12:02:33.946779  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetURL
	I1028 12:02:33.948250  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | Using libvirt version 6000000
	I1028 12:02:33.951035  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.951451  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.951478  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.951610  178661 main.go:141] libmachine: Docker is up and running!
	I1028 12:02:33.951619  178661 main.go:141] libmachine: Reticulating splines...
	I1028 12:02:33.951626  178661 client.go:171] duration metric: took 24.539769855s to LocalClient.Create
	I1028 12:02:33.951646  178661 start.go:167] duration metric: took 24.539836755s to libmachine.API.Create "NoKubernetes-606176"
	I1028 12:02:33.951652  178661 start.go:293] postStartSetup for "NoKubernetes-606176" (driver="kvm2")
	I1028 12:02:33.951664  178661 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:02:33.951697  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:33.951941  178661 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:02:33.951956  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:33.954397  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.954724  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:33.954747  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:33.954925  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:33.955101  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:33.955278  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:33.955425  178661 sshutil.go:53] new ssh client: &{IP:192.168.61.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa Username:docker}
	I1028 12:02:34.036990  178661 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:02:34.041614  178661 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:02:34.041646  178661 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:02:34.041712  178661 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:02:34.041777  178661 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:02:34.041857  178661 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:02:34.052080  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:02:34.076434  178661 start.go:296] duration metric: took 124.770191ms for postStartSetup
	I1028 12:02:34.076474  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetConfigRaw
	I1028 12:02:34.077092  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetIP
	I1028 12:02:34.079831  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.080259  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.080287  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.080553  178661 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/config.json ...
	I1028 12:02:34.080763  178661 start.go:128] duration metric: took 24.694949268s to createHost
	I1028 12:02:34.080797  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:34.083156  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.083447  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.083469  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.083633  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:34.083795  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:34.083943  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:34.084079  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:34.084233  178661 main.go:141] libmachine: Using SSH client type: native
	I1028 12:02:34.084391  178661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.189 22 <nil> <nil>}
	I1028 12:02:34.084395  178661 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:02:34.186567  178661 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730116954.157685888
	
	I1028 12:02:34.186583  178661 fix.go:216] guest clock: 1730116954.157685888
	I1028 12:02:34.186591  178661 fix.go:229] Guest: 2024-10-28 12:02:34.157685888 +0000 UTC Remote: 2024-10-28 12:02:34.080768965 +0000 UTC m=+33.421890115 (delta=76.916923ms)
	I1028 12:02:34.186614  178661 fix.go:200] guest clock delta is within tolerance: 76.916923ms
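	Note: the fix.go entries above read the guest clock with `date +%s.%N` over SSH and compare it against the host clock, resyncing only if the delta exceeds a tolerance. A minimal sketch of that comparison, assuming a one-second tolerance purely for illustration (the timestamp is the one captured in the log):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (seconds.nanoseconds,
// nine fractional digits) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730116954.157685888") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = time.Second // illustrative only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```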
	I1028 12:02:34.186630  178661 start.go:83] releasing machines lock for "NoKubernetes-606176", held for 24.800997268s
	I1028 12:02:34.186656  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:34.186889  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetIP
	I1028 12:02:34.190459  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.190803  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.190831  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.191084  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:34.191622  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:34.191815  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .DriverName
	I1028 12:02:34.191914  178661 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:02:34.191947  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:34.192044  178661 ssh_runner.go:195] Run: cat /version.json
	I1028 12:02:34.192059  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHHostname
	I1028 12:02:34.194935  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.195311  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.195331  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.195349  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.195538  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:34.195693  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:34.195805  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:34.195830  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:34.195855  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:34.195980  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHPort
	I1028 12:02:34.196050  178661 sshutil.go:53] new ssh client: &{IP:192.168.61.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa Username:docker}
	I1028 12:02:34.196118  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHKeyPath
	I1028 12:02:34.196229  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetSSHUsername
	I1028 12:02:34.196361  178661 sshutil.go:53] new ssh client: &{IP:192.168.61.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/NoKubernetes-606176/id_rsa Username:docker}
	I1028 12:02:34.295944  178661 ssh_runner.go:195] Run: systemctl --version
	I1028 12:02:34.303864  178661 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:02:34.468050  178661 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:02:34.475328  178661 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:02:34.475392  178661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:02:34.492679  178661 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:02:34.492696  178661 start.go:495] detecting cgroup driver to use...
	I1028 12:02:34.492769  178661 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:02:34.510630  178661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:02:34.526406  178661 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:02:34.526451  178661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:02:34.541808  178661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:02:34.561402  178661 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:02:34.692021  178661 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:02:34.843715  178661 docker.go:233] disabling docker service ...
	I1028 12:02:34.843773  178661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:02:34.860083  178661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:02:34.876629  178661 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:02:35.027920  178661 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:02:35.161409  178661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:02:35.176700  178661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:02:35.203638  178661 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:02:35.203715  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.217364  178661 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:02:35.217420  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.228651  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.240181  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.252876  178661 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:02:35.265493  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.277733  178661 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:02:35.298891  178661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
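	Note: the tee and sed commands above point crictl at the CRI-O socket and rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as its pause image, cgroupfs as its cgroup manager (with conmon in the "pod" cgroup), and allows unprivileged low ports. A standalone sketch of the same edits applied locally in Go rather than through ssh_runner; it simplifies the conmon_cgroup handling and needs root to write these paths:

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}

	// Equivalent of the sed substitutions in the log above.
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	out := pauseRe.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = cgroupRe.ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))

	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}

	// Equivalent of the crictl.yaml tee command: point crictl at CRI-O's socket.
	crictl := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictl), 0o644); err != nil {
		panic(err)
	}
}
```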
	I1028 12:02:35.310519  178661 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:02:35.320995  178661 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:02:35.321088  178661 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:02:35.335972  178661 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
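	Note: the sysctl failure above is expected when the br_netfilter module is not yet loaded, because /proc/sys/net/bridge/bridge-nf-call-iptables only exists after the module is inserted; the fallback is to modprobe the module and then enable IP forwarding. A small sketch of that check-then-load fallback (must run as root):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// The sysctl probe fails with "cannot stat" until br_netfilter is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}

	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
	log.Println("netfilter prerequisites in place")
}
```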
	I1028 12:02:35.348312  178661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:02:35.505928  178661 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:02:35.615915  178661 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:02:35.615971  178661 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:02:35.621965  178661 start.go:563] Will wait 60s for crictl version
	I1028 12:02:35.622020  178661 ssh_runner.go:195] Run: which crictl
	I1028 12:02:35.626173  178661 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:02:35.666940  178661 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:02:35.667007  178661 ssh_runner.go:195] Run: crio --version
	I1028 12:02:35.697950  178661 ssh_runner.go:195] Run: crio --version
	I1028 12:02:35.738815  178661 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:02:32.233588  179097 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I1028 12:02:32.233635  179097 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I1028 12:02:32.233650  179097 cache.go:56] Caching tarball of preloaded images
	I1028 12:02:32.233773  179097 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:02:32.233789  179097 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I1028 12:02:32.233876  179097 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/running-upgrade-628680/config.json ...
	I1028 12:02:32.234077  179097 start.go:360] acquireMachinesLock for running-upgrade-628680: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:02:34.211954  178984 main.go:141] libmachine: (stopped-upgrade-755815) Calling .Start
	I1028 12:02:34.212149  178984 main.go:141] libmachine: (stopped-upgrade-755815) Ensuring networks are active...
	I1028 12:02:34.212890  178984 main.go:141] libmachine: (stopped-upgrade-755815) Ensuring network default is active
	I1028 12:02:34.213280  178984 main.go:141] libmachine: (stopped-upgrade-755815) Ensuring network mk-stopped-upgrade-755815 is active
	I1028 12:02:34.213722  178984 main.go:141] libmachine: (stopped-upgrade-755815) Getting domain xml...
	I1028 12:02:34.214521  178984 main.go:141] libmachine: (stopped-upgrade-755815) Creating domain...
	I1028 12:02:35.533435  178984 main.go:141] libmachine: (stopped-upgrade-755815) Waiting to get IP...
	I1028 12:02:35.534355  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:35.534908  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:35.534968  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:35.534887  179174 retry.go:31] will retry after 212.195899ms: waiting for machine to come up
	I1028 12:02:35.748022  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:35.748493  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:35.748525  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:35.748438  179174 retry.go:31] will retry after 386.090397ms: waiting for machine to come up
	I1028 12:02:36.136454  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:36.137296  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:36.137335  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:36.137248  179174 retry.go:31] will retry after 345.767506ms: waiting for machine to come up
	I1028 12:02:36.485093  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:36.485694  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:36.485721  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:36.485647  179174 retry.go:31] will retry after 554.902566ms: waiting for machine to come up
	I1028 12:02:37.042252  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:37.042943  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:37.042981  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:37.042864  179174 retry.go:31] will retry after 483.556813ms: waiting for machine to come up
	I1028 12:02:35.246674  176084 pod_ready.go:103] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"False"
	I1028 12:02:37.259452  176084 pod_ready.go:103] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"False"
	I1028 12:02:38.244577  176084 pod_ready.go:93] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:38.244603  176084 pod_ready.go:82] duration metric: took 7.007352707s for pod "etcd-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.244613  176084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.755458  176084 pod_ready.go:93] pod "kube-apiserver-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:38.755490  176084 pod_ready.go:82] duration metric: took 510.868739ms for pod "kube-apiserver-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.755506  176084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.765422  176084 pod_ready.go:93] pod "kube-controller-manager-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:38.765449  176084 pod_ready.go:82] duration metric: took 9.933425ms for pod "kube-controller-manager-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.765461  176084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nllwf" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.772415  176084 pod_ready.go:93] pod "kube-proxy-nllwf" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:38.772434  176084 pod_ready.go:82] duration metric: took 6.966069ms for pod "kube-proxy-nllwf" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:38.772443  176084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-729494" in "kube-system" namespace to be "Ready" ...
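	Note: the pod_ready entries above poll each system-critical pod in kube-system until its Ready condition reports True, with a four-minute budget per pod. A condensed sketch of that wait loop using client-go; the kubeconfig path and two-second poll interval are assumptions for illustration, not the harness's actual values:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	// The same pods the log waits on.
	pods := []string{"etcd-pause-729494", "kube-apiserver-pause-729494",
		"kube-controller-manager-pause-729494", "kube-proxy-nllwf", "kube-scheduler-pause-729494"}
	for _, name := range pods {
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				break
			}
			select {
			case <-ctx.Done():
				panic(fmt.Sprintf("timed out waiting for %q", name))
			case <-time.After(2 * time.Second):
			}
		}
	}
}
```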
	I1028 12:02:35.740536  178661 main.go:141] libmachine: (NoKubernetes-606176) Calling .GetIP
	I1028 12:02:35.744346  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:35.744800  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:0a:13", ip: ""} in network mk-NoKubernetes-606176: {Iface:virbr1 ExpiryTime:2024-10-28 13:02:26 +0000 UTC Type:0 Mac:52:54:00:44:0a:13 Iaid: IPaddr:192.168.61.189 Prefix:24 Hostname:NoKubernetes-606176 Clientid:01:52:54:00:44:0a:13}
	I1028 12:02:35.744817  178661 main.go:141] libmachine: (NoKubernetes-606176) DBG | domain NoKubernetes-606176 has defined IP address 192.168.61.189 and MAC address 52:54:00:44:0a:13 in network mk-NoKubernetes-606176
	I1028 12:02:35.745050  178661 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:02:35.750982  178661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:02:35.765870  178661 kubeadm.go:883] updating cluster {Name:NoKubernetes-606176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:NoKubernetes-606176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.189 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:02:35.765965  178661 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:02:35.766008  178661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:02:35.804023  178661 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:02:35.804107  178661 ssh_runner.go:195] Run: which lz4
	I1028 12:02:35.808986  178661 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:02:35.813680  178661 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:02:35.813713  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:02:37.494165  178661 crio.go:462] duration metric: took 1.685216387s to copy over tarball
	I1028 12:02:37.494239  178661 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:02:39.889167  178661 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.394897638s)
	I1028 12:02:39.889185  178661 crio.go:469] duration metric: took 2.394999007s to extract the tarball
	I1028 12:02:39.889192  178661 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:02:39.927396  178661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:02:39.975894  178661 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:02:39.975907  178661 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:02:39.975914  178661 kubeadm.go:934] updating node { 192.168.61.189 8443 v1.31.2 crio true true} ...
	I1028 12:02:39.976003  178661 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=NoKubernetes-606176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:NoKubernetes-606176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:02:39.976077  178661 ssh_runner.go:195] Run: crio config
	I1028 12:02:40.036739  178661 cni.go:84] Creating CNI manager for ""
	I1028 12:02:40.036750  178661 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:02:40.036759  178661 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:02:40.036781  178661 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.189 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-606176 NodeName:NoKubernetes-606176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:02:40.036924  178661 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "NoKubernetes-606176"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.189"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.189"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:02:40.036993  178661 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:02:40.050860  178661 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:02:40.050926  178661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:02:40.060771  178661 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1028 12:02:40.080031  178661 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:02:40.097386  178661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
	I1028 12:02:40.115666  178661 ssh_runner.go:195] Run: grep 192.168.61.189	control-plane.minikube.internal$ /etc/hosts
	I1028 12:02:40.120706  178661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.189	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:02:40.135110  178661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:02:40.292584  178661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:02:40.312060  178661 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176 for IP: 192.168.61.189
	I1028 12:02:40.312082  178661 certs.go:194] generating shared ca certs ...
	I1028 12:02:40.312096  178661 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.312292  178661 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:02:40.312345  178661 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:02:40.312353  178661 certs.go:256] generating profile certs ...
	I1028 12:02:40.312420  178661 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.key
	I1028 12:02:40.312436  178661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.crt with IP's: []
	I1028 12:02:40.533096  178661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.crt ...
	I1028 12:02:40.533112  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.crt: {Name:mk42ccbff3b47f2e90827522ac56f68ab696f8eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.533324  178661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.key ...
	I1028 12:02:40.533337  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/client.key: {Name:mk6b971a9976c6de0a9371708b6a00a2c8713fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.534036  178661 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key.f2a33afc
	I1028 12:02:40.534050  178661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt.f2a33afc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.189]
	I1028 12:02:40.624366  178661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt.f2a33afc ...
	I1028 12:02:40.624381  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt.f2a33afc: {Name:mk023ffd5739f4e569c2704597c4ebc85a39b116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.624549  178661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key.f2a33afc ...
	I1028 12:02:40.624556  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key.f2a33afc: {Name:mkd9458ceffcfc7d28272577de72f60fa124ae1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.624625  178661 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt.f2a33afc -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt
	I1028 12:02:40.624696  178661 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key.f2a33afc -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key
	I1028 12:02:40.624740  178661 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.key
	I1028 12:02:40.624751  178661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.crt with IP's: []
	I1028 12:02:37.528438  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:37.528899  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:37.528922  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:37.528858  179174 retry.go:31] will retry after 826.387192ms: waiting for machine to come up
	I1028 12:02:38.357097  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:38.357568  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:38.357624  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:38.357551  179174 retry.go:31] will retry after 768.995626ms: waiting for machine to come up
	I1028 12:02:39.128387  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:39.128967  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:39.128995  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:39.128925  179174 retry.go:31] will retry after 943.551295ms: waiting for machine to come up
	I1028 12:02:40.074186  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:40.074689  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:40.074720  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:40.074636  179174 retry.go:31] will retry after 1.137013569s: waiting for machine to come up
	I1028 12:02:41.212978  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:41.213570  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:41.213597  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:41.213484  179174 retry.go:31] will retry after 1.981073277s: waiting for machine to come up
	I1028 12:02:40.780856  176084 pod_ready.go:103] pod "kube-scheduler-pause-729494" in "kube-system" namespace has status "Ready":"False"
	I1028 12:02:43.280872  176084 pod_ready.go:93] pod "kube-scheduler-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:43.280903  176084 pod_ready.go:82] duration metric: took 4.508453455s for pod "kube-scheduler-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.280914  176084 pod_ready.go:39] duration metric: took 12.053174948s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:02:43.280938  176084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:02:43.305816  176084 ops.go:34] apiserver oom_adj: -16
	I1028 12:02:43.305847  176084 kubeadm.go:597] duration metric: took 24.449261721s to restartPrimaryControlPlane
	I1028 12:02:43.305861  176084 kubeadm.go:394] duration metric: took 24.658223087s to StartCluster
	I1028 12:02:43.305883  176084 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:43.305970  176084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:02:43.306814  176084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:43.307057  176084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:02:43.307240  176084 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:02:43.307724  176084 config.go:182] Loaded profile config "pause-729494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:02:43.308723  176084 out.go:177] * Enabled addons: 
	I1028 12:02:43.308737  176084 out.go:177] * Verifying Kubernetes components...
	I1028 12:02:43.310769  176084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:02:43.310941  176084 addons.go:510] duration metric: took 3.713774ms for enable addons: enabled=[]
	I1028 12:02:43.585407  176084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:02:43.610233  176084 node_ready.go:35] waiting up to 6m0s for node "pause-729494" to be "Ready" ...
	I1028 12:02:43.614943  176084 node_ready.go:49] node "pause-729494" has status "Ready":"True"
	I1028 12:02:43.614982  176084 node_ready.go:38] duration metric: took 4.711656ms for node "pause-729494" to be "Ready" ...
	I1028 12:02:43.614995  176084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:02:43.624599  176084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.634255  176084 pod_ready.go:93] pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:43.634292  176084 pod_ready.go:82] duration metric: took 9.59593ms for pod "coredns-7c65d6cfc9-2x9sx" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.634308  176084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.640959  176084 pod_ready.go:93] pod "etcd-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:43.640986  176084 pod_ready.go:82] duration metric: took 6.669647ms for pod "etcd-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.640999  176084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.840907  176084 pod_ready.go:93] pod "kube-apiserver-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:43.840938  176084 pod_ready.go:82] duration metric: took 199.93075ms for pod "kube-apiserver-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:43.840953  176084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:44.242258  176084 pod_ready.go:93] pod "kube-controller-manager-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:44.242288  176084 pod_ready.go:82] duration metric: took 401.324613ms for pod "kube-controller-manager-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:44.242301  176084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nllwf" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:40.740435  178661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.crt ...
	I1028 12:02:40.740449  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.crt: {Name:mka461f0bd7149c619305492cab62b49f2cfc9e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.740623  178661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.key ...
	I1028 12:02:40.740632  178661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.key: {Name:mkaa73e54bed3cf848ca71bdcd979b6e50b24313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:02:40.740799  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:02:40.740831  178661 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:02:40.740837  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:02:40.740859  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:02:40.740876  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:02:40.740895  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:02:40.740927  178661 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:02:40.741571  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:02:40.780410  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:02:40.817454  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:02:40.849490  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:02:40.880074  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 12:02:40.907691  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:02:40.935695  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:02:40.964319  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/NoKubernetes-606176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:02:40.998533  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:02:41.039107  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:02:41.073550  178661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:02:41.099726  178661 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:02:41.119496  178661 ssh_runner.go:195] Run: openssl version
	I1028 12:02:41.128267  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:02:41.141798  178661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:02:41.147071  178661 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:02:41.147132  178661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:02:41.153744  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:02:41.166333  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:02:41.178826  178661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:02:41.184044  178661 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:02:41.184105  178661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:02:41.190588  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:02:41.203959  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:02:41.217766  178661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:02:41.222881  178661 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:02:41.222940  178661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:02:41.229354  178661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:02:41.243654  178661 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:02:41.249226  178661 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:02:41.249283  178661 kubeadm.go:392] StartCluster: {Name:NoKubernetes-606176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:NoKubernetes-606176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.189 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:02:41.249366  178661 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:02:41.249410  178661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:02:41.296980  178661 cri.go:89] found id: ""
	I1028 12:02:41.297059  178661 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:02:41.308451  178661 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:02:41.320583  178661 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:02:41.333395  178661 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:02:41.333405  178661 kubeadm.go:157] found existing configuration files:
	
	I1028 12:02:41.333461  178661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:02:41.344881  178661 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:02:41.344952  178661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:02:41.356519  178661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:02:41.367239  178661 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:02:41.367325  178661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:02:41.378250  178661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:02:41.390443  178661 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:02:41.390513  178661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:02:41.402575  178661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:02:41.414288  178661 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:02:41.414334  178661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:02:41.424933  178661 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:02:41.609002  178661 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:02:44.642979  176084 pod_ready.go:93] pod "kube-proxy-nllwf" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:44.643012  176084 pod_ready.go:82] duration metric: took 400.70208ms for pod "kube-proxy-nllwf" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:44.643028  176084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:45.044821  176084 pod_ready.go:93] pod "kube-scheduler-pause-729494" in "kube-system" namespace has status "Ready":"True"
	I1028 12:02:45.044856  176084 pod_ready.go:82] duration metric: took 401.818535ms for pod "kube-scheduler-pause-729494" in "kube-system" namespace to be "Ready" ...
	I1028 12:02:45.044868  176084 pod_ready.go:39] duration metric: took 1.429859372s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:02:45.044890  176084 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:02:45.044956  176084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:02:45.066376  176084 api_server.go:72] duration metric: took 1.759277156s to wait for apiserver process to appear ...
	I1028 12:02:45.066413  176084 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:02:45.066443  176084 api_server.go:253] Checking apiserver healthz at https://192.168.50.55:8443/healthz ...
	I1028 12:02:45.075141  176084 api_server.go:279] https://192.168.50.55:8443/healthz returned 200:
	ok
	I1028 12:02:45.077401  176084 api_server.go:141] control plane version: v1.31.2
	I1028 12:02:45.077430  176084 api_server.go:131] duration metric: took 11.007517ms to wait for apiserver health ...
	I1028 12:02:45.077442  176084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:02:45.245247  176084 system_pods.go:59] 6 kube-system pods found
	I1028 12:02:45.245291  176084 system_pods.go:61] "coredns-7c65d6cfc9-2x9sx" [6c991e2e-d7bc-4aee-a537-4885075a5453] Running
	I1028 12:02:45.245300  176084 system_pods.go:61] "etcd-pause-729494" [f3e0e1e4-25c5-4343-8256-24c8080d6f9b] Running
	I1028 12:02:45.245306  176084 system_pods.go:61] "kube-apiserver-pause-729494" [47a5b86a-6abd-42f5-86bb-cec0d357827c] Running
	I1028 12:02:45.245311  176084 system_pods.go:61] "kube-controller-manager-pause-729494" [cdfc5376-eb2e-46ae-a83d-bbfeddb8319c] Running
	I1028 12:02:45.245317  176084 system_pods.go:61] "kube-proxy-nllwf" [e08aea94-206d-4bec-96b4-8fb7703efeda] Running
	I1028 12:02:45.245322  176084 system_pods.go:61] "kube-scheduler-pause-729494" [6666e789-7fb0-4bd4-bc83-9228d9aa987d] Running
	I1028 12:02:45.245330  176084 system_pods.go:74] duration metric: took 167.879776ms to wait for pod list to return data ...
	I1028 12:02:45.245339  176084 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:02:45.441748  176084 default_sa.go:45] found service account: "default"
	I1028 12:02:45.441810  176084 default_sa.go:55] duration metric: took 196.461049ms for default service account to be created ...
	I1028 12:02:45.441825  176084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:02:45.645363  176084 system_pods.go:86] 6 kube-system pods found
	I1028 12:02:45.645401  176084 system_pods.go:89] "coredns-7c65d6cfc9-2x9sx" [6c991e2e-d7bc-4aee-a537-4885075a5453] Running
	I1028 12:02:45.645409  176084 system_pods.go:89] "etcd-pause-729494" [f3e0e1e4-25c5-4343-8256-24c8080d6f9b] Running
	I1028 12:02:45.645415  176084 system_pods.go:89] "kube-apiserver-pause-729494" [47a5b86a-6abd-42f5-86bb-cec0d357827c] Running
	I1028 12:02:45.645421  176084 system_pods.go:89] "kube-controller-manager-pause-729494" [cdfc5376-eb2e-46ae-a83d-bbfeddb8319c] Running
	I1028 12:02:45.645427  176084 system_pods.go:89] "kube-proxy-nllwf" [e08aea94-206d-4bec-96b4-8fb7703efeda] Running
	I1028 12:02:45.645440  176084 system_pods.go:89] "kube-scheduler-pause-729494" [6666e789-7fb0-4bd4-bc83-9228d9aa987d] Running
	I1028 12:02:45.645450  176084 system_pods.go:126] duration metric: took 203.614298ms to wait for k8s-apps to be running ...
	I1028 12:02:45.645466  176084 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:02:45.645520  176084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:02:45.665674  176084 system_svc.go:56] duration metric: took 20.187979ms WaitForService to wait for kubelet
	I1028 12:02:45.665715  176084 kubeadm.go:582] duration metric: took 2.358625746s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:02:45.665741  176084 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:02:45.842608  176084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:02:45.842640  176084 node_conditions.go:123] node cpu capacity is 2
	I1028 12:02:45.842655  176084 node_conditions.go:105] duration metric: took 176.908393ms to run NodePressure ...
	I1028 12:02:45.842670  176084 start.go:241] waiting for startup goroutines ...
	I1028 12:02:45.842679  176084 start.go:246] waiting for cluster config update ...
	I1028 12:02:45.842690  176084 start.go:255] writing updated cluster config ...
	I1028 12:02:45.843035  176084 ssh_runner.go:195] Run: rm -f paused
	I1028 12:02:45.905871  176084 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:02:45.908309  176084 out.go:177] * Done! kubectl is now configured to use "pause-729494" cluster and "default" namespace by default
	I1028 12:02:43.197919  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:43.198541  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:43.198579  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:43.198465  179174 retry.go:31] will retry after 1.932680108s: waiting for machine to come up
	I1028 12:02:45.133072  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | domain stopped-upgrade-755815 has defined MAC address 52:54:00:f7:13:4a in network mk-stopped-upgrade-755815
	I1028 12:02:45.133679  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | unable to find current IP address of domain stopped-upgrade-755815 in network mk-stopped-upgrade-755815
	I1028 12:02:45.133703  178984 main.go:141] libmachine: (stopped-upgrade-755815) DBG | I1028 12:02:45.133573  179174 retry.go:31] will retry after 3.338928169s: waiting for machine to come up
	
	
	==> CRI-O <==
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.144315336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116969144272323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58bf60b9-c9ac-4070-90c8-0b4d52ccdc5e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.144995458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63b631be-210d-4d30-bf2b-af6dcc243acd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.145066986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63b631be-210d-4d30-bf2b-af6dcc243acd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.145445471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116945841977407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116945825446597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116945849329060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116945816751890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879,PodSandboxId:146b410f0c64f09b36dababd64c7f9593d9c3881cad39def597a4f41a6ca3685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116938958292363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5,PodSandboxId:3b2920750504930ff1b54b5b163fe85e379df4b61717d91193ee26b2ed3db846,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116938198809092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730116938184371362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730116938124870739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730116938098977616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730116938033562049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39,PodSandboxId:3a497834ed50f01f15fe615a557e694031874b1777e9b64ace2de471bc3637ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730116871383405925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18,PodSandboxId:1e064f7d3e18806b851452ab037bc4b77dcb12c1cf95f1c35fb98741c223b65c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730116870896451174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63b631be-210d-4d30-bf2b-af6dcc243acd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.196475852Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f2db186-3903-4852-8787-95dbe712d6a5 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.196570600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f2db186-3903-4852-8787-95dbe712d6a5 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.198152369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03ebac06-4966-4726-a945-bb2cb412787c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.198551217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116969198524936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03ebac06-4966-4726-a945-bb2cb412787c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.199102937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c85d086e-bffb-48af-a4ea-2f1d325c7165 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.199180775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c85d086e-bffb-48af-a4ea-2f1d325c7165 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.199559763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116945841977407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116945825446597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116945849329060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116945816751890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879,PodSandboxId:146b410f0c64f09b36dababd64c7f9593d9c3881cad39def597a4f41a6ca3685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116938958292363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5,PodSandboxId:3b2920750504930ff1b54b5b163fe85e379df4b61717d91193ee26b2ed3db846,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116938198809092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730116938184371362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730116938124870739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes
.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730116938098977616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16
c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730116938033562049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39,PodSandboxId:3a497834ed50f01f15fe615a557e694031874b1777e9b64ace2de471bc3637ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730116871383405925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18,PodSandboxId:1e064f7d3e18806b851452ab037bc4b77dcb12c1cf95f1c35fb98741c223b65c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730116870896451174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c85d086e-bffb-48af-a4ea-2f1d325c7165 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.252191252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4ff247b-2466-4f5b-8538-20cae9a77a83 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.252350419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4ff247b-2466-4f5b-8538-20cae9a77a83 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.253590217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7850601-1f56-4d29-9c3b-dec8dfcb7b54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.254250459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116969254218309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7850601-1f56-4d29-9c3b-dec8dfcb7b54 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.254967548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba259ff3-90cb-4d23-a069-af812b32a354 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.255063796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba259ff3-90cb-4d23-a069-af812b32a354 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.255433799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116945841977407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116945825446597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116945849329060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116945816751890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879,PodSandboxId:146b410f0c64f09b36dababd64c7f9593d9c3881cad39def597a4f41a6ca3685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116938958292363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5,PodSandboxId:3b2920750504930ff1b54b5b163fe85e379df4b61717d91193ee26b2ed3db846,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116938198809092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730116938184371362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730116938124870739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes
.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730116938098977616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16
c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730116938033562049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39,PodSandboxId:3a497834ed50f01f15fe615a557e694031874b1777e9b64ace2de471bc3637ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730116871383405925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18,PodSandboxId:1e064f7d3e18806b851452ab037bc4b77dcb12c1cf95f1c35fb98741c223b65c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730116870896451174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba259ff3-90cb-4d23-a069-af812b32a354 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.317232862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f44c5047-7ca3-4c09-b765-5512b076d871 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.317323992Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f44c5047-7ca3-4c09-b765-5512b076d871 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.318597973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5cbf9ecf-cfe8-4a23-a08d-c87c09ebed45 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.319094454Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116969319063847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cbf9ecf-cfe8-4a23-a08d-c87c09ebed45 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.320359202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b68ddf6-d07a-41c9-a5d7-eaffda4401b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.320441750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b68ddf6-d07a-41c9-a5d7-eaffda4401b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:02:49 pause-729494 crio[2087]: time="2024-10-28 12:02:49.320813884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730116945841977407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730116945825446597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730116945849329060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730116945816751890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879,PodSandboxId:146b410f0c64f09b36dababd64c7f9593d9c3881cad39def597a4f41a6ca3685,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730116938958292363,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5,PodSandboxId:3b2920750504930ff1b54b5b163fe85e379df4b61717d91193ee26b2ed3db846,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730116938198809092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff,PodSandboxId:d3458d7ef61b357e5047a87e4bbe8b52429c77840571e01dc0c9676d64a45ba6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730116938184371362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a848f43c17050c81e3120bbd662949d4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce,PodSandboxId:c9fa80b1ebf13dbe006a321567b1166c519f1afa02fe6b826ba53d0295277c1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1730116938124870739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 776178f279465c23cfa0a77dc5f8fbf5,},Annotations:map[string]string{io.kubernetes
.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7,PodSandboxId:a90b43ac79dc5add04491867200a90ad4cdcaeefa13a26106d1628a93cec900b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1730116938098977616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa6017ca8f664f8c9b623b0eb5883fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16
c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f,PodSandboxId:17ef1e7cdb8ed6aeb2ae080487cb3bd629105a5185136c59caae7008ba03536e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1730116938033562049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-729494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4122c8d5619a2e403a9abcc6705e0638,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39,PodSandboxId:3a497834ed50f01f15fe615a557e694031874b1777e9b64ace2de471bc3637ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1730116871383405925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2x9sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c991e2e-d7bc-4aee-a537-4885075a5453,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18,PodSandboxId:1e064f7d3e18806b851452ab037bc4b77dcb12c1cf95f1c35fb98741c223b65c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1730116870896451174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nllwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e08aea94-206d-4bec-96b4-8fb7703efeda,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b68ddf6-d07a-41c9-a5d7-eaffda4401b2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f7f2dea58f360       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   23 seconds ago       Running             kube-controller-manager   2                   c9fa80b1ebf13       kube-controller-manager-pause-729494
	4d0db03afd590       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   23 seconds ago       Running             kube-scheduler            2                   a90b43ac79dc5       kube-scheduler-pause-729494
	0e14b4f24dcba       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   23 seconds ago       Running             kube-apiserver            2                   d3458d7ef61b3       kube-apiserver-pause-729494
	96fb536ef57f7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago       Running             etcd                      2                   17ef1e7cdb8ed       etcd-pause-729494
	a9e32e54acba6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   30 seconds ago       Running             coredns                   1                   146b410f0c64f       coredns-7c65d6cfc9-2x9sx
	2a8a09df850d5       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   31 seconds ago       Running             kube-proxy                1                   3b29207505049       kube-proxy-nllwf
	ace56dcad4dec       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   31 seconds ago       Exited              kube-apiserver            1                   d3458d7ef61b3       kube-apiserver-pause-729494
	5741524d0fbbc       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   31 seconds ago       Exited              kube-controller-manager   1                   c9fa80b1ebf13       kube-controller-manager-pause-729494
	b185d1303bb1d       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   31 seconds ago       Exited              kube-scheduler            1                   a90b43ac79dc5       kube-scheduler-pause-729494
	02770335761f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   31 seconds ago       Exited              etcd                      1                   17ef1e7cdb8ed       etcd-pause-729494
	8072e0f3b1595       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   3a497834ed50f       coredns-7c65d6cfc9-2x9sx
	128cedd262b0e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   About a minute ago   Exited              kube-proxy                0                   1e064f7d3e188       kube-proxy-nllwf
	
	
	==> coredns [8072e0f3b15952aaca8b044d3b98884b8b3c4f595355e74b14e8c1421fe35f39] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9e32e54acba6ca349e589fa9e77c04e29b64620e3cf604219c9463ed11d5879] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42376 - 52013 "HINFO IN 1982915378677061536.8477218873360209454. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0196642s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-729494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-729494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=pause-729494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_01_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:01:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-729494
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:02:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:02:29 +0000   Mon, 28 Oct 2024 12:01:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:02:29 +0000   Mon, 28 Oct 2024 12:01:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:02:29 +0000   Mon, 28 Oct 2024 12:01:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:02:29 +0000   Mon, 28 Oct 2024 12:01:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.55
	  Hostname:    pause-729494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 63324c7aee4148babf6e389e90938a65
	  System UUID:                63324c7a-ee41-48ba-bf6e-389e90938a65
	  Boot ID:                    30071432-2695-4783-b7af-61b13af0d389
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2x9sx                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     100s
	  kube-system                 etcd-pause-729494                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         105s
	  kube-system                 kube-apiserver-pause-729494             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-pause-729494    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-nllwf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-pause-729494             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 20s                  kube-proxy       
	  Normal  Starting                 98s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     111s (x7 over 112s)  kubelet          Node pause-729494 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  111s (x8 over 112s)  kubelet          Node pause-729494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 112s)  kubelet          Node pause-729494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s                 kubelet          Node pause-729494 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  105s                 kubelet          Node pause-729494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s                 kubelet          Node pause-729494 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeReady                104s                 kubelet          Node pause-729494 status is now: NodeReady
	  Normal  RegisteredNode           101s                 node-controller  Node pause-729494 event: Registered Node pause-729494 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-729494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-729494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-729494 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                  node-controller  Node pause-729494 event: Registered Node pause-729494 in Controller
	
	
	==> dmesg <==
	[  +0.059870] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057580] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.215578] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.111529] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.282862] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.375924] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +0.069496] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.012504] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.075155] kauditd_printk_skb: 18 callbacks suppressed
	[Oct28 12:01] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.080087] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.865879] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[  +0.312441] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.235534] kauditd_printk_skb: 50 callbacks suppressed
	[Oct28 12:02] systemd-fstab-generator[2012]: Ignoring "noauto" option for root device
	[  +0.211999] systemd-fstab-generator[2024]: Ignoring "noauto" option for root device
	[  +0.244627] systemd-fstab-generator[2038]: Ignoring "noauto" option for root device
	[  +0.155247] systemd-fstab-generator[2050]: Ignoring "noauto" option for root device
	[  +0.337556] systemd-fstab-generator[2078]: Ignoring "noauto" option for root device
	[  +6.302279] systemd-fstab-generator[2199]: Ignoring "noauto" option for root device
	[  +0.082297] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.562266] kauditd_printk_skb: 85 callbacks suppressed
	[  +2.279950] systemd-fstab-generator[2964]: Ignoring "noauto" option for root device
	[  +4.607654] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.721203] systemd-fstab-generator[3321]: Ignoring "noauto" option for root device
	
	
	==> etcd [02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f] <==
	{"level":"info","ts":"2024-10-28T12:02:20.067168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T12:02:20.067221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 received MsgPreVoteResp from 328c932a5e3b8b76 at term 2"}
	{"level":"info","ts":"2024-10-28T12:02:20.067287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:20.067325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 received MsgVoteResp from 328c932a5e3b8b76 at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:20.067356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became leader at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:20.067381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 328c932a5e3b8b76 elected leader 328c932a5e3b8b76 at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:20.076150Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"328c932a5e3b8b76","local-member-attributes":"{Name:pause-729494 ClientURLs:[https://192.168.50.55:2379]}","request-path":"/0/members/328c932a5e3b8b76/attributes","cluster-id":"e0630d851be0da94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:02:20.076948Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:02:20.083480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:02:20.088785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.55:2379"}
	{"level":"info","ts":"2024-10-28T12:02:20.093070Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:02:20.094653Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:02:20.099080Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:02:20.099143Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:02:20.106176Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T12:02:23.394458Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-28T12:02:23.394559Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-729494","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.55:2380"],"advertise-client-urls":["https://192.168.50.55:2379"]}
	{"level":"warn","ts":"2024-10-28T12:02:23.394727Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T12:02:23.394769Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T12:02:23.396555Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.55:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-28T12:02:23.396653Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.55:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-28T12:02:23.396861Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"328c932a5e3b8b76","current-leader-member-id":"328c932a5e3b8b76"}
	{"level":"info","ts":"2024-10-28T12:02:23.402210Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-10-28T12:02:23.402326Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-10-28T12:02:23.402357Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-729494","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.55:2380"],"advertise-client-urls":["https://192.168.50.55:2379"]}
	
	
	==> etcd [96fb536ef57f7c3e36642343dd4a6ca33ae6ac407b2edb67cda0dc2728d45071] <==
	{"level":"info","ts":"2024-10-28T12:02:26.218399Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e0630d851be0da94","local-member-id":"328c932a5e3b8b76","added-peer-id":"328c932a5e3b8b76","added-peer-peer-urls":["https://192.168.50.55:2380"]}
	{"level":"info","ts":"2024-10-28T12:02:26.218563Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e0630d851be0da94","local-member-id":"328c932a5e3b8b76","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:02:26.218618Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:02:26.222395Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T12:02:26.237058Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"328c932a5e3b8b76","initial-advertise-peer-urls":["https://192.168.50.55:2380"],"listen-peer-urls":["https://192.168.50.55:2380"],"advertise-client-urls":["https://192.168.50.55:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.55:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T12:02:26.230871Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-10-28T12:02:26.239929Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T12:02:26.240088Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.55:2380"}
	{"level":"info","ts":"2024-10-28T12:02:27.976090Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:27.976152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:27.976196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 received MsgPreVoteResp from 328c932a5e3b8b76 at term 3"}
	{"level":"info","ts":"2024-10-28T12:02:27.976210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became candidate at term 4"}
	{"level":"info","ts":"2024-10-28T12:02:27.976216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 received MsgVoteResp from 328c932a5e3b8b76 at term 4"}
	{"level":"info","ts":"2024-10-28T12:02:27.976224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"328c932a5e3b8b76 became leader at term 4"}
	{"level":"info","ts":"2024-10-28T12:02:27.976231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 328c932a5e3b8b76 elected leader 328c932a5e3b8b76 at term 4"}
	{"level":"info","ts":"2024-10-28T12:02:27.977637Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"328c932a5e3b8b76","local-member-attributes":"{Name:pause-729494 ClientURLs:[https://192.168.50.55:2379]}","request-path":"/0/members/328c932a5e3b8b76/attributes","cluster-id":"e0630d851be0da94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:02:27.977684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:02:27.977663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:02:27.978737Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:02:27.979028Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:02:27.979807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T12:02:27.979941Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.55:2379"}
	{"level":"info","ts":"2024-10-28T12:02:27.980072Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:02:27.980104Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:02:43.025936Z","caller":"traceutil/trace.go:171","msg":"trace[647129839] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"192.675328ms","start":"2024-10-28T12:02:42.833151Z","end":"2024-10-28T12:02:43.025826Z","steps":["trace[647129839] 'process raft request'  (duration: 192.4452ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:02:49 up 2 min,  0 users,  load average: 0.96, 0.29, 0.10
	Linux pause-729494 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0e14b4f24dcbade8cd5044b6f0262ebd1543ee85fbfb9a7ca4458d10d747689f] <==
	I1028 12:02:29.315963       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1028 12:02:29.318056       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1028 12:02:29.318589       1 shared_informer.go:320] Caches are synced for configmaps
	I1028 12:02:29.326419       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1028 12:02:29.326520       1 policy_source.go:224] refreshing policies
	I1028 12:02:29.351991       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1028 12:02:29.359869       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1028 12:02:29.375858       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1028 12:02:29.376134       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1028 12:02:29.377646       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1028 12:02:29.381098       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1028 12:02:29.382090       1 aggregator.go:171] initial CRD sync complete...
	I1028 12:02:29.382155       1 autoregister_controller.go:144] Starting autoregister controller
	I1028 12:02:29.382179       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1028 12:02:29.382203       1 cache.go:39] Caches are synced for autoregister controller
	I1028 12:02:29.392201       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1028 12:02:29.403491       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1028 12:02:30.219703       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1028 12:02:31.070229       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1028 12:02:31.085289       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1028 12:02:31.133700       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1028 12:02:31.173072       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1028 12:02:31.180419       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1028 12:02:33.031263       1 controller.go:615] quota admission added evaluator for: endpoints
	I1028 12:02:33.080240       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff] <==
	E1028 12:02:22.037104       1 customresource_discovery_controller.go:295] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	F1028 12:02:22.037168       1 hooks.go:210] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	I1028 12:02:22.157003       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1028 12:02:22.157156       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1028 12:02:22.157241       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for cluster_authentication_trust_controller" logger="UnhandledError"
	I1028 12:02:22.160992       1 cluster_authentication_trust_controller.go:451] Shutting down cluster_authentication_trust_controller controller
	I1028 12:02:22.161085       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	E1028 12:02:22.161292       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for configmaps" logger="UnhandledError"
	E1028 12:02:22.161391       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for LocalAvailability controller" logger="UnhandledError"
	E1028 12:02:22.161436       1 controller.go:89] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	F1028 12:02:22.161475       1 hooks.go:210] PostStartHook "crd-informer-synced" failed: timed out waiting for the condition
	E1028 12:02:22.251206       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for APIServiceRegistrationController controller" logger="UnhandledError"
	E1028 12:02:22.251329       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	I1028 12:02:22.251968       1 crd_finalizer.go:273] Shutting down CRDFinalizer
	I1028 12:02:22.252082       1 apiapproval_controller.go:193] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1028 12:02:22.252133       1 nonstructuralschema_controller.go:199] Shutting down NonStructuralSchemaConditionController
	I1028 12:02:22.252184       1 establishing_controller.go:85] Shutting down EstablishingController
	I1028 12:02:22.252220       1 naming_controller.go:298] Shutting down NamingConditionController
	E1028 12:02:22.252258       1 controller.go:95] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I1028 12:02:22.252297       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I1028 12:02:22.252330       1 apiservice_controller.go:104] Shutting down APIServiceRegistrationController
	I1028 12:02:22.252363       1 remote_available_controller.go:419] Shutting down RemoteAvailability controller
	I1028 12:02:22.252395       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1028 12:02:22.252429       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1028 12:02:22.252476       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce] <==
	I1028 12:02:19.545176       1 serving.go:386] Generated self-signed cert in-memory
	I1028 12:02:20.109797       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1028 12:02:20.109853       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:02:20.118566       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1028 12:02:20.122118       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:02:20.122423       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1028 12:02:20.122999       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [f7f2dea58f360c0808602d912722fafd05340f4122c46015ae95955a5849289b] <==
	I1028 12:02:32.774230       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1028 12:02:32.796174       1 shared_informer.go:320] Caches are synced for namespace
	I1028 12:02:32.800042       1 shared_informer.go:320] Caches are synced for service account
	I1028 12:02:32.803606       1 shared_informer.go:320] Caches are synced for job
	I1028 12:02:32.808130       1 shared_informer.go:320] Caches are synced for daemon sets
	I1028 12:02:32.812095       1 shared_informer.go:320] Caches are synced for disruption
	I1028 12:02:32.872630       1 shared_informer.go:320] Caches are synced for PV protection
	I1028 12:02:32.920534       1 shared_informer.go:320] Caches are synced for ephemeral
	I1028 12:02:32.923971       1 shared_informer.go:320] Caches are synced for PVC protection
	I1028 12:02:32.924098       1 shared_informer.go:320] Caches are synced for persistent volume
	I1028 12:02:32.927932       1 shared_informer.go:320] Caches are synced for attach detach
	I1028 12:02:32.931875       1 shared_informer.go:320] Caches are synced for endpoint
	I1028 12:02:32.934321       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1028 12:02:32.946262       1 shared_informer.go:320] Caches are synced for stateful set
	I1028 12:02:32.955166       1 shared_informer.go:320] Caches are synced for expand
	I1028 12:02:32.980050       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 12:02:32.996161       1 shared_informer.go:320] Caches are synced for resource quota
	I1028 12:02:33.025765       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1028 12:02:33.421500       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 12:02:33.424959       1 shared_informer.go:320] Caches are synced for garbage collector
	I1028 12:02:33.425111       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1028 12:02:36.150774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="75.680465ms"
	I1028 12:02:36.151283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.737µs"
	I1028 12:02:36.184251       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.039661ms"
	I1028 12:02:36.184718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="247.966µs"
	
	
	==> kube-proxy [128cedd262b0edb2663ee3b0f5d401533bcbe08d2d61bfc7f098649ad7c23b18] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:01:11.171403       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:01:11.194834       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.55"]
	E1028 12:01:11.195035       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:01:11.252677       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:01:11.252728       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:01:11.252758       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:01:11.256858       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:01:11.257260       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:01:11.257289       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:01:11.260328       1 config.go:199] "Starting service config controller"
	I1028 12:01:11.260864       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:01:11.261112       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:01:11.261137       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:01:11.262676       1 config.go:328] "Starting node config controller"
	I1028 12:01:11.262706       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:01:11.364065       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:01:11.364103       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:01:11.364130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [2a8a09df850d5ad75285458cc3951229ff124c1cf14acaf4b8010e51936f3af5] <==
	 >
	E1028 12:02:19.971041       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:02:23.277279       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-729494\": dial tcp 192.168.50.55:8443: connect: connection refused - error from a previous attempt: unexpected EOF"
	E1028 12:02:24.340989       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-729494\": dial tcp 192.168.50.55:8443: connect: connection refused"
	I1028 12:02:29.417721       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.55"]
	E1028 12:02:29.417855       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:02:29.522399       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:02:29.522493       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:02:29.522538       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:02:29.526728       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:02:29.527144       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:02:29.527188       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:02:29.528526       1 config.go:199] "Starting service config controller"
	I1028 12:02:29.528633       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:02:29.529217       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:02:29.529229       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:02:29.529821       1 config.go:328] "Starting node config controller"
	I1028 12:02:29.529857       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:02:29.629454       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 12:02:29.629537       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:02:29.630075       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d0db03afd590754c4a1ccc7ecfc65bb421ab52ad237d3cd57eb59cf86ca2d04] <==
	I1028 12:02:26.749862       1 serving.go:386] Generated self-signed cert in-memory
	W1028 12:02:29.286736       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 12:02:29.286991       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:02:29.287032       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 12:02:29.287056       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 12:02:29.360511       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 12:02:29.362951       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:02:29.365718       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 12:02:29.373019       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:02:29.373413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 12:02:29.373455       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 12:02:29.473999       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7] <==
	I1028 12:02:19.384423       1 serving.go:386] Generated self-signed cert in-memory
	W1028 12:02:22.253593       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 12:02:22.255956       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:02:22.256036       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 12:02:22.256071       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 12:02:23.282587       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 12:02:23.282821       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1028 12:02:23.282943       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1028 12:02:23.286947       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 12:02:23.287010       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1028 12:02:23.287032       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I1028 12:02:23.287504       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 12:02:23.287552       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:02:23.287589       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1028 12:02:23.287681       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I1028 12:02:23.287791       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1028 12:02:23.287933       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.699104    2971 kubelet_node_status.go:72] "Attempting to register node" node="pause-729494"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: E1028 12:02:25.700402    2971 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.55:8443: connect: connection refused" node="pause-729494"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.794009    2971 scope.go:117] "RemoveContainer" containerID="02770335761f3af66b5a56bae57564857f750b9fa74aff11eef791b73c34e41f"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.796240    2971 scope.go:117] "RemoveContainer" containerID="ace56dcad4dec752f2dcbcf7676d172e1a4a1326c2e130c79e80fa2812c390ff"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.798384    2971 scope.go:117] "RemoveContainer" containerID="5741524d0fbbc1f377d92e1101721e9042e437e48f5c3fbd73e3c56fd63c9dce"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: I1028 12:02:25.800114    2971 scope.go:117] "RemoveContainer" containerID="b185d1303bb1d87c39e8d8d183c234881814c1f5901d5e4b4a8b231d0af14be7"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: E1028 12:02:25.854446    2971 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.55:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-729494.18029c31f795e9f4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-729494,UID:pause-729494,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:pause-729494,},FirstTimestamp:2024-10-28 12:02:25.284819444 +0000 UTC m=+0.104589372,LastTimestamp:2024-10-28 12:02:25.284819444 +0000 UTC m=+0.104589372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-729494,}"
	Oct 28 12:02:25 pause-729494 kubelet[2971]: E1028 12:02:25.916474    2971 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-729494?timeout=10s\": dial tcp 192.168.50.55:8443: connect: connection refused" interval="800ms"
	Oct 28 12:02:26 pause-729494 kubelet[2971]: I1028 12:02:26.102793    2971 kubelet_node_status.go:72] "Attempting to register node" node="pause-729494"
	Oct 28 12:02:26 pause-729494 kubelet[2971]: E1028 12:02:26.104396    2971 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.55:8443: connect: connection refused" node="pause-729494"
	Oct 28 12:02:26 pause-729494 kubelet[2971]: I1028 12:02:26.906289    2971 kubelet_node_status.go:72] "Attempting to register node" node="pause-729494"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: I1028 12:02:29.432513    2971 kubelet_node_status.go:111] "Node was previously registered" node="pause-729494"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: I1028 12:02:29.432766    2971 kubelet_node_status.go:75] "Successfully registered node" node="pause-729494"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: I1028 12:02:29.432810    2971 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: I1028 12:02:29.435007    2971 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: E1028 12:02:29.545121    2971 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-pause-729494\" already exists" pod="kube-system/etcd-pause-729494"
	Oct 28 12:02:29 pause-729494 kubelet[2971]: E1028 12:02:29.545124    2971 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-729494\" already exists" pod="kube-system/kube-apiserver-pause-729494"
	Oct 28 12:02:30 pause-729494 kubelet[2971]: I1028 12:02:30.297352    2971 apiserver.go:52] "Watching apiserver"
	Oct 28 12:02:30 pause-729494 kubelet[2971]: I1028 12:02:30.308701    2971 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 28 12:02:30 pause-729494 kubelet[2971]: I1028 12:02:30.322717    2971 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e08aea94-206d-4bec-96b4-8fb7703efeda-xtables-lock\") pod \"kube-proxy-nllwf\" (UID: \"e08aea94-206d-4bec-96b4-8fb7703efeda\") " pod="kube-system/kube-proxy-nllwf"
	Oct 28 12:02:30 pause-729494 kubelet[2971]: I1028 12:02:30.323630    2971 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e08aea94-206d-4bec-96b4-8fb7703efeda-lib-modules\") pod \"kube-proxy-nllwf\" (UID: \"e08aea94-206d-4bec-96b4-8fb7703efeda\") " pod="kube-system/kube-proxy-nllwf"
	Oct 28 12:02:35 pause-729494 kubelet[2971]: E1028 12:02:35.406771    2971 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116955403206340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:02:35 pause-729494 kubelet[2971]: E1028 12:02:35.408259    2971 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116955403206340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:02:45 pause-729494 kubelet[2971]: E1028 12:02:45.410745    2971 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116965410079989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:02:45 pause-729494 kubelet[2971]: E1028 12:02:45.410946    2971 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730116965410079989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-729494 -n pause-729494
helpers_test.go:261: (dbg) Run:  kubectl --context pause-729494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (91.29s)
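To triage this failure outside CI, one option is to rerun only the failing test from a minikube source checkout. The sketch below is an assumption about the repository layout (integration tests under test/integration) and is not the command line used by this job; driver and runtime selection (kvm2 + crio in this run) is done through the test harness's own flags, which are omitted here:

	go test ./test/integration -run 'TestPause/serial/SecondStartNoReconfiguration' -timeout 60m

For the reproduction to be meaningful, the harness arguments would still need to match this job's configuration (kvm2 driver, crio container runtime).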

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (297.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-089993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-089993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m57.265468402s)

                                                
                                                
-- stdout --
	* [old-k8s-version-089993] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-089993" primary control-plane node in "old-k8s-version-089993" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:05:15.619768  182116 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:05:15.619899  182116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:05:15.619908  182116 out.go:358] Setting ErrFile to fd 2...
	I1028 12:05:15.619912  182116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:05:15.620083  182116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:05:15.620650  182116 out.go:352] Setting JSON to false
	I1028 12:05:15.621670  182116 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6459,"bootTime":1730110657,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:05:15.621731  182116 start.go:139] virtualization: kvm guest
	I1028 12:05:15.624294  182116 out.go:177] * [old-k8s-version-089993] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:05:15.626397  182116 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:05:15.626406  182116 notify.go:220] Checking for updates...
	I1028 12:05:15.628116  182116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:05:15.629656  182116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:05:15.631016  182116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:05:15.632410  182116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:05:15.633828  182116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:05:15.635468  182116 config.go:182] Loaded profile config "cert-expiration-601400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:05:15.635591  182116 config.go:182] Loaded profile config "cert-options-961573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:05:15.635680  182116 config.go:182] Loaded profile config "kubernetes-upgrade-337849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:05:15.635786  182116 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:05:15.672560  182116 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 12:05:15.674105  182116 start.go:297] selected driver: kvm2
	I1028 12:05:15.674137  182116 start.go:901] validating driver "kvm2" against <nil>
	I1028 12:05:15.674155  182116 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:05:15.674965  182116 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:05:15.675065  182116 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:05:15.690683  182116 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:05:15.690732  182116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 12:05:15.690968  182116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:05:15.691001  182116 cni.go:84] Creating CNI manager for ""
	I1028 12:05:15.691068  182116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:05:15.691079  182116 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 12:05:15.691127  182116 start.go:340] cluster config:
	{Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:05:15.691227  182116 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:05:15.693258  182116 out.go:177] * Starting "old-k8s-version-089993" primary control-plane node in "old-k8s-version-089993" cluster
	I1028 12:05:15.694526  182116 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:05:15.694574  182116 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 12:05:15.694589  182116 cache.go:56] Caching tarball of preloaded images
	I1028 12:05:15.694690  182116 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:05:15.694707  182116 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1028 12:05:15.694881  182116 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:05:15.694920  182116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json: {Name:mk2d8061217be26a0966969acec7b2112abe1134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:05:15.695092  182116 start.go:360] acquireMachinesLock for old-k8s-version-089993: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:05:37.538944  182116 start.go:364] duration metric: took 21.843776936s to acquireMachinesLock for "old-k8s-version-089993"
	I1028 12:05:37.539046  182116 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:05:37.539169  182116 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 12:05:37.541343  182116 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 12:05:37.541556  182116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:05:37.541616  182116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:05:37.562358  182116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
	I1028 12:05:37.562830  182116 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:05:37.563501  182116 main.go:141] libmachine: Using API Version  1
	I1028 12:05:37.563529  182116 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:05:37.563911  182116 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:05:37.564115  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:05:37.564290  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:05:37.564476  182116 start.go:159] libmachine.API.Create for "old-k8s-version-089993" (driver="kvm2")
	I1028 12:05:37.564510  182116 client.go:168] LocalClient.Create starting
	I1028 12:05:37.564555  182116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 12:05:37.564601  182116 main.go:141] libmachine: Decoding PEM data...
	I1028 12:05:37.564627  182116 main.go:141] libmachine: Parsing certificate...
	I1028 12:05:37.564697  182116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 12:05:37.564726  182116 main.go:141] libmachine: Decoding PEM data...
	I1028 12:05:37.564758  182116 main.go:141] libmachine: Parsing certificate...
	I1028 12:05:37.564786  182116 main.go:141] libmachine: Running pre-create checks...
	I1028 12:05:37.564799  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .PreCreateCheck
	I1028 12:05:37.565264  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:05:37.565683  182116 main.go:141] libmachine: Creating machine...
	I1028 12:05:37.565697  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .Create
	I1028 12:05:37.565826  182116 main.go:141] libmachine: (old-k8s-version-089993) Creating KVM machine...
	I1028 12:05:37.567138  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found existing default KVM network
	I1028 12:05:37.568414  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:37.568265  182277 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:42:19:98} reservation:<nil>}
	I1028 12:05:37.569141  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:37.569046  182277 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:63:90:ea} reservation:<nil>}
	I1028 12:05:37.570482  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:37.570416  182277 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002891b0}
	I1028 12:05:37.570556  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | created network xml: 
	I1028 12:05:37.570577  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | <network>
	I1028 12:05:37.570589  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG |   <name>mk-old-k8s-version-089993</name>
	I1028 12:05:37.570607  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG |   <dns enable='no'/>
	I1028 12:05:37.570619  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG |   
	I1028 12:05:37.570628  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1028 12:05:37.570640  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG |     <dhcp>
	I1028 12:05:37.570649  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1028 12:05:37.570687  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG |     </dhcp>
	I1028 12:05:37.570708  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG |   </ip>
	I1028 12:05:37.570721  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG |   
	I1028 12:05:37.570733  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | </network>
	I1028 12:05:37.570748  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | 
	I1028 12:05:37.576440  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | trying to create private KVM network mk-old-k8s-version-089993 192.168.61.0/24...
	I1028 12:05:37.651679  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | private KVM network mk-old-k8s-version-089993 192.168.61.0/24 created
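The <network> XML printed above is what libmachine hands to libvirt to create the private mk-old-k8s-version-089993 network on the free subnet 192.168.61.0/24 (DNS disabled, DHCP range .2-.253). As a rough manual equivalent, and only as a sketch (the file name is hypothetical and assumes the XML above was saved locally), the same network could be brought up with virsh:

    # save the <network> XML shown above to mk-old-k8s-version-089993.xml, then:
    virsh net-define mk-old-k8s-version-089993.xml   # register the network with libvirt
    virsh net-start  mk-old-k8s-version-089993       # create the bridge and start its DHCP range
    virsh net-info   mk-old-k8s-version-089993       # confirm the network is active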
	I1028 12:05:37.651718  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:37.651611  182277 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:05:37.651749  182116 main.go:141] libmachine: (old-k8s-version-089993) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993 ...
	I1028 12:05:37.651777  182116 main.go:141] libmachine: (old-k8s-version-089993) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 12:05:37.651799  182116 main.go:141] libmachine: (old-k8s-version-089993) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 12:05:37.910144  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:37.910010  182277 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa...
	I1028 12:05:38.291184  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:38.291050  182277 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/old-k8s-version-089993.rawdisk...
	I1028 12:05:38.291223  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Writing magic tar header
	I1028 12:05:38.291244  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Writing SSH key tar header
	I1028 12:05:38.291258  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:38.291154  182277 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993 ...
	I1028 12:05:38.291274  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993
	I1028 12:05:38.291294  182116 main.go:141] libmachine: (old-k8s-version-089993) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993 (perms=drwx------)
	I1028 12:05:38.291304  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 12:05:38.291321  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:05:38.291379  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 12:05:38.291397  182116 main.go:141] libmachine: (old-k8s-version-089993) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 12:05:38.291407  182116 main.go:141] libmachine: (old-k8s-version-089993) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 12:05:38.291420  182116 main.go:141] libmachine: (old-k8s-version-089993) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 12:05:38.291440  182116 main.go:141] libmachine: (old-k8s-version-089993) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 12:05:38.291458  182116 main.go:141] libmachine: (old-k8s-version-089993) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 12:05:38.291470  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 12:05:38.291486  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Checking permissions on dir: /home/jenkins
	I1028 12:05:38.291499  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Checking permissions on dir: /home
	I1028 12:05:38.291515  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Skipping /home - not owner
	I1028 12:05:38.291532  182116 main.go:141] libmachine: (old-k8s-version-089993) Creating domain...
	I1028 12:05:38.292633  182116 main.go:141] libmachine: (old-k8s-version-089993) define libvirt domain using xml: 
	I1028 12:05:38.292659  182116 main.go:141] libmachine: (old-k8s-version-089993) <domain type='kvm'>
	I1028 12:05:38.292706  182116 main.go:141] libmachine: (old-k8s-version-089993)   <name>old-k8s-version-089993</name>
	I1028 12:05:38.292728  182116 main.go:141] libmachine: (old-k8s-version-089993)   <memory unit='MiB'>2200</memory>
	I1028 12:05:38.292740  182116 main.go:141] libmachine: (old-k8s-version-089993)   <vcpu>2</vcpu>
	I1028 12:05:38.292751  182116 main.go:141] libmachine: (old-k8s-version-089993)   <features>
	I1028 12:05:38.292764  182116 main.go:141] libmachine: (old-k8s-version-089993)     <acpi/>
	I1028 12:05:38.292786  182116 main.go:141] libmachine: (old-k8s-version-089993)     <apic/>
	I1028 12:05:38.292799  182116 main.go:141] libmachine: (old-k8s-version-089993)     <pae/>
	I1028 12:05:38.292815  182116 main.go:141] libmachine: (old-k8s-version-089993)     
	I1028 12:05:38.292828  182116 main.go:141] libmachine: (old-k8s-version-089993)   </features>
	I1028 12:05:38.292839  182116 main.go:141] libmachine: (old-k8s-version-089993)   <cpu mode='host-passthrough'>
	I1028 12:05:38.292850  182116 main.go:141] libmachine: (old-k8s-version-089993)   
	I1028 12:05:38.292858  182116 main.go:141] libmachine: (old-k8s-version-089993)   </cpu>
	I1028 12:05:38.292874  182116 main.go:141] libmachine: (old-k8s-version-089993)   <os>
	I1028 12:05:38.292886  182116 main.go:141] libmachine: (old-k8s-version-089993)     <type>hvm</type>
	I1028 12:05:38.292898  182116 main.go:141] libmachine: (old-k8s-version-089993)     <boot dev='cdrom'/>
	I1028 12:05:38.292909  182116 main.go:141] libmachine: (old-k8s-version-089993)     <boot dev='hd'/>
	I1028 12:05:38.292922  182116 main.go:141] libmachine: (old-k8s-version-089993)     <bootmenu enable='no'/>
	I1028 12:05:38.292932  182116 main.go:141] libmachine: (old-k8s-version-089993)   </os>
	I1028 12:05:38.292943  182116 main.go:141] libmachine: (old-k8s-version-089993)   <devices>
	I1028 12:05:38.292955  182116 main.go:141] libmachine: (old-k8s-version-089993)     <disk type='file' device='cdrom'>
	I1028 12:05:38.292970  182116 main.go:141] libmachine: (old-k8s-version-089993)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/boot2docker.iso'/>
	I1028 12:05:38.292985  182116 main.go:141] libmachine: (old-k8s-version-089993)       <target dev='hdc' bus='scsi'/>
	I1028 12:05:38.292998  182116 main.go:141] libmachine: (old-k8s-version-089993)       <readonly/>
	I1028 12:05:38.293008  182116 main.go:141] libmachine: (old-k8s-version-089993)     </disk>
	I1028 12:05:38.293019  182116 main.go:141] libmachine: (old-k8s-version-089993)     <disk type='file' device='disk'>
	I1028 12:05:38.293032  182116 main.go:141] libmachine: (old-k8s-version-089993)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 12:05:38.293050  182116 main.go:141] libmachine: (old-k8s-version-089993)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/old-k8s-version-089993.rawdisk'/>
	I1028 12:05:38.293069  182116 main.go:141] libmachine: (old-k8s-version-089993)       <target dev='hda' bus='virtio'/>
	I1028 12:05:38.293082  182116 main.go:141] libmachine: (old-k8s-version-089993)     </disk>
	I1028 12:05:38.293092  182116 main.go:141] libmachine: (old-k8s-version-089993)     <interface type='network'>
	I1028 12:05:38.293104  182116 main.go:141] libmachine: (old-k8s-version-089993)       <source network='mk-old-k8s-version-089993'/>
	I1028 12:05:38.293115  182116 main.go:141] libmachine: (old-k8s-version-089993)       <model type='virtio'/>
	I1028 12:05:38.293127  182116 main.go:141] libmachine: (old-k8s-version-089993)     </interface>
	I1028 12:05:38.293138  182116 main.go:141] libmachine: (old-k8s-version-089993)     <interface type='network'>
	I1028 12:05:38.293150  182116 main.go:141] libmachine: (old-k8s-version-089993)       <source network='default'/>
	I1028 12:05:38.293161  182116 main.go:141] libmachine: (old-k8s-version-089993)       <model type='virtio'/>
	I1028 12:05:38.293173  182116 main.go:141] libmachine: (old-k8s-version-089993)     </interface>
	I1028 12:05:38.293181  182116 main.go:141] libmachine: (old-k8s-version-089993)     <serial type='pty'>
	I1028 12:05:38.293194  182116 main.go:141] libmachine: (old-k8s-version-089993)       <target port='0'/>
	I1028 12:05:38.293202  182116 main.go:141] libmachine: (old-k8s-version-089993)     </serial>
	I1028 12:05:38.293215  182116 main.go:141] libmachine: (old-k8s-version-089993)     <console type='pty'>
	I1028 12:05:38.293226  182116 main.go:141] libmachine: (old-k8s-version-089993)       <target type='serial' port='0'/>
	I1028 12:05:38.293239  182116 main.go:141] libmachine: (old-k8s-version-089993)     </console>
	I1028 12:05:38.293250  182116 main.go:141] libmachine: (old-k8s-version-089993)     <rng model='virtio'>
	I1028 12:05:38.293267  182116 main.go:141] libmachine: (old-k8s-version-089993)       <backend model='random'>/dev/random</backend>
	I1028 12:05:38.293278  182116 main.go:141] libmachine: (old-k8s-version-089993)     </rng>
	I1028 12:05:38.293286  182116 main.go:141] libmachine: (old-k8s-version-089993)     
	I1028 12:05:38.293296  182116 main.go:141] libmachine: (old-k8s-version-089993)     
	I1028 12:05:38.293305  182116 main.go:141] libmachine: (old-k8s-version-089993)   </devices>
	I1028 12:05:38.293315  182116 main.go:141] libmachine: (old-k8s-version-089993) </domain>
	I1028 12:05:38.293327  182116 main.go:141] libmachine: (old-k8s-version-089993) 
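The <domain> XML above describes the VM that is about to be created: 2200 MiB of RAM, 2 vCPUs, the boot2docker ISO attached as a cdrom, the raw disk image, and one virtio NIC on each of the mk-old-k8s-version-089993 and default networks. A hand-run equivalent of the define-and-start step, assuming the XML were saved to old-k8s-version-089993.xml (hypothetical file name), would look like:

    virsh define old-k8s-version-089993.xml     # register the domain definition with libvirt
    virsh start  old-k8s-version-089993         # boot the VM
    virsh domiflist old-k8s-version-089993      # list its NICs, MACs and attached networks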
	I1028 12:05:38.297774  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:e2:68:c4 in network default
	I1028 12:05:38.298396  182116 main.go:141] libmachine: (old-k8s-version-089993) Ensuring networks are active...
	I1028 12:05:38.298423  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:38.299111  182116 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network default is active
	I1028 12:05:38.299493  182116 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network mk-old-k8s-version-089993 is active
	I1028 12:05:38.300198  182116 main.go:141] libmachine: (old-k8s-version-089993) Getting domain xml...
	I1028 12:05:38.300922  182116 main.go:141] libmachine: (old-k8s-version-089993) Creating domain...
	I1028 12:05:39.650255  182116 main.go:141] libmachine: (old-k8s-version-089993) Waiting to get IP...
	I1028 12:05:39.651299  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:39.651811  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:39.651839  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:39.651783  182277 retry.go:31] will retry after 270.594122ms: waiting for machine to come up
	I1028 12:05:39.924626  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:39.925333  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:39.925362  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:39.925285  182277 retry.go:31] will retry after 267.092145ms: waiting for machine to come up
	I1028 12:05:40.193939  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:40.194467  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:40.194495  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:40.194408  182277 retry.go:31] will retry after 442.811082ms: waiting for machine to come up
	I1028 12:05:40.638723  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:40.639343  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:40.639372  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:40.639280  182277 retry.go:31] will retry after 530.611455ms: waiting for machine to come up
	I1028 12:05:41.171908  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:41.172529  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:41.172554  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:41.172480  182277 retry.go:31] will retry after 701.633934ms: waiting for machine to come up
	I1028 12:05:41.875928  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:41.876547  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:41.876606  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:41.876478  182277 retry.go:31] will retry after 712.731892ms: waiting for machine to come up
	I1028 12:05:42.590750  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:42.591266  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:42.591304  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:42.591215  182277 retry.go:31] will retry after 838.81515ms: waiting for machine to come up
	I1028 12:05:43.431833  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:43.432578  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:43.432609  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:43.432499  182277 retry.go:31] will retry after 1.470200826s: waiting for machine to come up
	I1028 12:05:44.905247  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:44.905753  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:44.905776  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:44.905716  182277 retry.go:31] will retry after 1.141784114s: waiting for machine to come up
	I1028 12:05:46.048640  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:46.049070  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:46.049106  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:46.049021  182277 retry.go:31] will retry after 1.572543642s: waiting for machine to come up
	I1028 12:05:47.623537  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:47.624120  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:47.624147  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:47.624078  182277 retry.go:31] will retry after 2.840648356s: waiting for machine to come up
	I1028 12:05:50.467093  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:50.467621  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:50.467646  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:50.467572  182277 retry.go:31] will retry after 3.290945129s: waiting for machine to come up
	I1028 12:05:53.760272  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:53.760762  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:53.760786  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:53.760743  182277 retry.go:31] will retry after 3.618852027s: waiting for machine to come up
	I1028 12:05:57.382859  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:05:57.383400  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:05:57.383430  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:05:57.383348  182277 retry.go:31] will retry after 4.224989514s: waiting for machine to come up
	I1028 12:06:01.613122  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:01.613685  182116 main.go:141] libmachine: (old-k8s-version-089993) Found IP for machine: 192.168.61.119
	I1028 12:06:01.613711  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has current primary IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:01.613718  182116 main.go:141] libmachine: (old-k8s-version-089993) Reserving static IP address...
	I1028 12:06:01.614002  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"} in network mk-old-k8s-version-089993
	I1028 12:06:01.694985  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Getting to WaitForSSH function...
	I1028 12:06:01.695018  182116 main.go:141] libmachine: (old-k8s-version-089993) Reserved static IP address: 192.168.61.119
	I1028 12:06:01.695053  182116 main.go:141] libmachine: (old-k8s-version-089993) Waiting for SSH to be available...
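The "waiting for machine to come up" retries above are polling libvirt's DHCP lease table with increasing backoff until the MAC 52:54:00:50:95:38 acquires an address; once the lease appears, the IP 192.168.61.119 is reserved as a static host entry. The same lookup can be done by hand (sketch, assuming access to the same libvirt instance):

    virsh net-dhcp-leases mk-old-k8s-version-089993           # leases handed out on the private network
    virsh domifaddr --source lease old-k8s-version-089993     # map the domain's MAC to its leased IP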
	I1028 12:06:01.697648  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:01.698156  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993
	I1028 12:06:01.698190  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find defined IP address of network mk-old-k8s-version-089993 interface with MAC address 52:54:00:50:95:38
	I1028 12:06:01.698294  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH client type: external
	I1028 12:06:01.698321  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa (-rw-------)
	I1028 12:06:01.698349  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:06:01.698363  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | About to run SSH command:
	I1028 12:06:01.698376  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | exit 0
	I1028 12:06:01.702096  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | SSH cmd err, output: exit status 255: 
	I1028 12:06:01.702122  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1028 12:06:01.702132  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | command : exit 0
	I1028 12:06:01.702146  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | err     : exit status 255
	I1028 12:06:01.702158  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | output  : 
	I1028 12:06:04.702806  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Getting to WaitForSSH function...
	I1028 12:06:04.705086  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:04.705404  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:04.705437  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:04.705602  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH client type: external
	I1028 12:06:04.705630  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa (-rw-------)
	I1028 12:06:04.705674  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:06:04.705687  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | About to run SSH command:
	I1028 12:06:04.705697  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | exit 0
	I1028 12:06:04.833744  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | SSH cmd err, output: <nil>: 
	I1028 12:06:04.833996  182116 main.go:141] libmachine: (old-k8s-version-089993) KVM machine creation complete!
	I1028 12:06:04.834347  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:06:04.834961  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:06:04.835161  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:06:04.835317  182116 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 12:06:04.835330  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetState
	I1028 12:06:04.836709  182116 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 12:06:04.836724  182116 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 12:06:04.836730  182116 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 12:06:04.836741  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:04.839177  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:04.839529  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:04.839557  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:04.839718  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:04.839893  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:04.840073  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:04.840216  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:04.840390  182116 main.go:141] libmachine: Using SSH client type: native
	I1028 12:06:04.840628  182116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:06:04.840641  182116 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 12:06:04.949092  182116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:06:04.949122  182116 main.go:141] libmachine: Detecting the provisioner...
	I1028 12:06:04.949134  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:04.952174  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:04.952473  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:04.952492  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:04.952745  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:04.953004  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:04.953214  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:04.953355  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:04.953492  182116 main.go:141] libmachine: Using SSH client type: native
	I1028 12:06:04.953701  182116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:06:04.953713  182116 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 12:06:05.062391  182116 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 12:06:05.062470  182116 main.go:141] libmachine: found compatible host: buildroot
	I1028 12:06:05.062484  182116 main.go:141] libmachine: Provisioning with buildroot...
	I1028 12:06:05.062494  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:06:05.062749  182116 buildroot.go:166] provisioning hostname "old-k8s-version-089993"
	I1028 12:06:05.062782  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:06:05.063012  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:05.065843  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.066235  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.066283  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.066412  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:05.066584  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.066723  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.066853  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:05.067003  182116 main.go:141] libmachine: Using SSH client type: native
	I1028 12:06:05.067184  182116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:06:05.067195  182116 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089993 && echo "old-k8s-version-089993" | sudo tee /etc/hostname
	I1028 12:06:05.193261  182116 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089993
	
	I1028 12:06:05.193304  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:05.196202  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.196526  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.196558  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.196760  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:05.196937  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.197060  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.197177  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:05.197308  182116 main.go:141] libmachine: Using SSH client type: native
	I1028 12:06:05.197466  182116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:06:05.197481  182116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089993/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:06:05.319643  182116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
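The SSH snippet above sets the guest hostname and, if needed, rewrites the 127.0.1.1 entry in /etc/hosts to point at old-k8s-version-089993. A quick manual check, reusing the connection parameters already logged for this machine (illustrative only), would be:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa \
        docker@192.168.61.119 'hostname; grep old-k8s-version-089993 /etc/hosts'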
	I1028 12:06:05.319675  182116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:06:05.319693  182116 buildroot.go:174] setting up certificates
	I1028 12:06:05.319703  182116 provision.go:84] configureAuth start
	I1028 12:06:05.319711  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:06:05.319983  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:06:05.322817  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.323124  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.323154  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.323275  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:05.325727  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.326058  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.326078  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.326261  182116 provision.go:143] copyHostCerts
	I1028 12:06:05.326316  182116 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:06:05.326327  182116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:06:05.326388  182116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:06:05.326523  182116 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:06:05.326537  182116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:06:05.326574  182116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:06:05.326638  182116 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:06:05.326645  182116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:06:05.326663  182116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:06:05.326710  182116 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089993 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-089993]
	I1028 12:06:05.397884  182116 provision.go:177] copyRemoteCerts
	I1028 12:06:05.397945  182116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:06:05.397969  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:05.400440  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.400717  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.400752  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.400902  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:05.401098  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.401252  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:05.401380  182116 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:06:05.488367  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:06:05.517297  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:06:05.546029  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:06:05.573960  182116 provision.go:87] duration metric: took 254.244276ms to configureAuth
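configureAuth above regenerates the machine server certificate with SANs for 127.0.0.1, 192.168.61.119, localhost, minikube and the profile name, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. The SAN list of the generated certificate can be inspected locally with openssl (sketch; path taken from the store path logged above):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'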
	I1028 12:06:05.573990  182116 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:06:05.574187  182116 config.go:182] Loaded profile config "old-k8s-version-089993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:06:05.574341  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:05.576937  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.577290  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.577331  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.577573  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:05.577776  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.577910  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.578068  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:05.578250  182116 main.go:141] libmachine: Using SSH client type: native
	I1028 12:06:05.578477  182116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:06:05.578509  182116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:06:05.826125  182116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
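The tee/systemctl command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube so that cri-o treats the in-cluster service CIDR 10.96.0.0/12 as an insecure registry, and then restarts cri-o to pick the option up. Inside the guest this can be confirmed with (sketch):

    sudo cat /etc/sysconfig/crio.minikube     # should contain the --insecure-registry option
    sudo systemctl is-active crio             # cri-o should be running again after the restart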
	
	I1028 12:06:05.826156  182116 main.go:141] libmachine: Checking connection to Docker...
	I1028 12:06:05.826168  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetURL
	I1028 12:06:05.827488  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using libvirt version 6000000
	I1028 12:06:05.830152  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.830492  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.830516  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.830670  182116 main.go:141] libmachine: Docker is up and running!
	I1028 12:06:05.830687  182116 main.go:141] libmachine: Reticulating splines...
	I1028 12:06:05.830709  182116 client.go:171] duration metric: took 28.266173297s to LocalClient.Create
	I1028 12:06:05.830735  182116 start.go:167] duration metric: took 28.266263024s to libmachine.API.Create "old-k8s-version-089993"
	I1028 12:06:05.830762  182116 start.go:293] postStartSetup for "old-k8s-version-089993" (driver="kvm2")
	I1028 12:06:05.830779  182116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:06:05.830810  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:06:05.831083  182116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:06:05.831111  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:05.833428  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.833739  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.833761  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.833894  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:05.834090  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.834239  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:05.834368  182116 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:06:05.920285  182116 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:06:05.925005  182116 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:06:05.925047  182116 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:06:05.925119  182116 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:06:05.925216  182116 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:06:05.925392  182116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:06:05.935306  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:06:05.962747  182116 start.go:296] duration metric: took 131.965869ms for postStartSetup
	I1028 12:06:05.962795  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:06:05.963462  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:06:05.966873  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.967308  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.967341  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.967588  182116 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:06:05.967783  182116 start.go:128] duration metric: took 28.42860045s to createHost
	I1028 12:06:05.967806  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:05.970039  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.970348  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:05.970381  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:05.970566  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:05.970759  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.970940  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:05.971072  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:05.971269  182116 main.go:141] libmachine: Using SSH client type: native
	I1028 12:06:05.971478  182116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:06:05.971491  182116 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:06:06.082481  182116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117166.049056683
	
	I1028 12:06:06.082511  182116 fix.go:216] guest clock: 1730117166.049056683
	I1028 12:06:06.082520  182116 fix.go:229] Guest: 2024-10-28 12:06:06.049056683 +0000 UTC Remote: 2024-10-28 12:06:05.96779635 +0000 UTC m=+50.388433701 (delta=81.260333ms)
	I1028 12:06:06.082559  182116 fix.go:200] guest clock delta is within tolerance: 81.260333ms
	I1028 12:06:06.082564  182116 start.go:83] releasing machines lock for "old-k8s-version-089993", held for 28.543555517s
	I1028 12:06:06.082588  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:06:06.082855  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:06:06.085887  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:06.086268  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:06.086311  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:06.086432  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:06:06.087085  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:06:06.087265  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:06:06.087375  182116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:06:06.087418  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:06.087459  182116 ssh_runner.go:195] Run: cat /version.json
	I1028 12:06:06.087480  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:06:06.090380  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:06.090546  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:06.090744  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:06.090767  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:06.090954  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:06.091080  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:06.091117  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:06.091117  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:06.091280  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:06.091327  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:06:06.091406  182116 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:06:06.091464  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:06:06.091626  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:06:06.091771  182116 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:06:06.199190  182116 ssh_runner.go:195] Run: systemctl --version
	I1028 12:06:06.206127  182116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:06:06.372262  182116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:06:06.379798  182116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:06:06.379895  182116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:06:06.399761  182116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:06:06.399785  182116 start.go:495] detecting cgroup driver to use...
	I1028 12:06:06.399848  182116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:06:06.418332  182116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:06:06.434450  182116 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:06:06.434519  182116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:06:06.450362  182116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:06:06.465348  182116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:06:06.592490  182116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:06:06.731853  182116 docker.go:233] disabling docker service ...
	I1028 12:06:06.731923  182116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:06:06.746926  182116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:06:06.761047  182116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:06:06.906874  182116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:06:07.035821  182116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:06:07.050824  182116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:06:07.073925  182116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:06:07.074005  182116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:06:07.085593  182116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:06:07.085674  182116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:06:07.097451  182116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:06:07.108330  182116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:06:07.119820  182116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
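	The tee/sed invocations above are the whole CRI-O side of the runtime setup: crictl is pointed at the CRI-O socket, and /etc/crio/crio.conf.d/02-crio.conf gets its pause image, cgroup manager, and conmon cgroup rewritten. A minimal sketch of confirming the result on the node; the commands are standard tooling and the expected values are read off the tee/sed commands above, not captured from this run:
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, per the edits above:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock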
	I1028 12:06:07.130854  182116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:06:07.141080  182116 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:06:07.141132  182116 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:06:07.157759  182116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
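	The sysctl probe above failed only because br_netfilter was not loaded yet, so minikube falls back to loading the module and enabling IPv4 forwarding directly. A short sketch of re-checking those settings by hand (standard Linux commands, not taken from this run):
	    lsmod | grep br_netfilter                  # loaded by the modprobe above
	    sysctl net.bridge.bridge-nf-call-iptables  # resolvable once the module is in (defaults to 1)
	    cat /proc/sys/net/ipv4/ip_forward          # 1, written by the echo above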
	I1028 12:06:07.172262  182116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:06:07.301595  182116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:06:07.414715  182116 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:06:07.414783  182116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:06:07.420509  182116 start.go:563] Will wait 60s for crictl version
	I1028 12:06:07.420556  182116 ssh_runner.go:195] Run: which crictl
	I1028 12:06:07.424920  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:06:07.472176  182116 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:06:07.472268  182116 ssh_runner.go:195] Run: crio --version
	I1028 12:06:07.507905  182116 ssh_runner.go:195] Run: crio --version
	I1028 12:06:07.540609  182116 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:06:07.541960  182116 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:06:07.545114  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:07.545651  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:05:53 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:06:07.545684  182116 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:06:07.545969  182116 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:06:07.550722  182116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:06:07.564607  182116 kubeadm.go:883] updating cluster {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:06:07.564712  182116 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:06:07.564752  182116 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:06:07.618527  182116 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:06:07.618653  182116 ssh_runner.go:195] Run: which lz4
	I1028 12:06:07.623409  182116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:06:07.628252  182116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:06:07.628288  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:06:09.457220  182116 crio.go:462] duration metric: took 1.833853572s to copy over tarball
	I1028 12:06:09.457334  182116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:06:12.166368  182116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.70899581s)
	I1028 12:06:12.166410  182116 crio.go:469] duration metric: took 2.709138567s to extract the tarball
	I1028 12:06:12.166420  182116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:06:12.210822  182116 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:06:12.258067  182116 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:06:12.258101  182116 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:06:12.258188  182116 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:06:12.258276  182116 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:06:12.258193  182116 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:06:12.258278  182116 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:06:12.258247  182116 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:06:12.258209  182116 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:06:12.258299  182116 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:06:12.258307  182116 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:06:12.260221  182116 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:06:12.260255  182116 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:06:12.260282  182116 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:06:12.260218  182116 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:06:12.260228  182116 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:06:12.260238  182116 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:06:12.260249  182116 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:06:12.260245  182116 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:06:12.440878  182116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:06:12.451165  182116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:06:12.468551  182116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:06:12.473667  182116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:06:12.483613  182116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:06:12.502016  182116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:06:12.503970  182116 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:06:12.504023  182116 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:06:12.504071  182116 ssh_runner.go:195] Run: which crictl
	I1028 12:06:12.517545  182116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:06:12.600253  182116 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:06:12.600308  182116 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:06:12.600360  182116 ssh_runner.go:195] Run: which crictl
	I1028 12:06:12.643679  182116 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:06:12.643718  182116 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:06:12.643728  182116 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:06:12.643752  182116 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:06:12.643781  182116 ssh_runner.go:195] Run: which crictl
	I1028 12:06:12.643790  182116 ssh_runner.go:195] Run: which crictl
	I1028 12:06:12.657600  182116 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:06:12.657619  182116 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:06:12.657652  182116 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:06:12.657651  182116 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:06:12.657700  182116 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:06:12.657705  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:06:12.657719  182116 ssh_runner.go:195] Run: which crictl
	I1028 12:06:12.657722  182116 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:06:12.657702  182116 ssh_runner.go:195] Run: which crictl
	I1028 12:06:12.657754  182116 ssh_runner.go:195] Run: which crictl
	I1028 12:06:12.657794  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:06:12.657838  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:06:12.657816  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:06:12.683385  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:06:12.777486  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:06:12.777568  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:06:12.777670  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:06:12.777715  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:06:12.777760  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:06:12.777792  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:06:12.813725  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:06:12.941234  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:06:12.941378  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:06:12.941397  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:06:12.941434  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:06:12.941454  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:06:12.941478  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:06:13.004424  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:06:13.086875  182116 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:06:13.110882  182116 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:06:13.110908  182116 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:06:13.110955  182116 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:06:13.125654  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:06:13.125669  182116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:06:13.139230  182116 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:06:13.189213  182116 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:06:13.195094  182116 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:06:13.393230  182116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:06:13.538764  182116 cache_images.go:92] duration metric: took 1.280639502s to LoadCachedImages
	W1028 12:06:13.538922  182116 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1028 12:06:13.538942  182116 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1028 12:06:13.539077  182116 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089993 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:06:13.539172  182116 ssh_runner.go:195] Run: crio config
	I1028 12:06:13.588793  182116 cni.go:84] Creating CNI manager for ""
	I1028 12:06:13.588825  182116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:06:13.588841  182116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:06:13.588866  182116 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089993 NodeName:old-k8s-version-089993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:06:13.589068  182116 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089993"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:06:13.589145  182116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:06:13.599897  182116 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:06:13.599979  182116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:06:13.611836  182116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:06:13.632179  182116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
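	The kubelet unit text printed earlier (kubeadm.go:946) is what lands here: the drop-in goes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the service file to /lib/systemd/system/kubelet.service. A hedged sketch of inspecting the unit systemd will actually run, using standard systemctl subcommands rather than anything this test invokes:
	    systemctl cat kubelet        # kubelet.service plus the 10-kubeadm.conf drop-in
	    systemctl daemon-reload      # minikube issues this below before 'systemctl start kubelet'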
	I1028 12:06:13.650577  182116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:06:13.668363  182116 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1028 12:06:13.672458  182116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:06:13.685780  182116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:06:13.830342  182116 ssh_runner.go:195] Run: sudo systemctl start kubelet
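	The kubelet is started here, well before kubeadm init runs; the failure later in this log is precisely that it never answers its health endpoint. Collected for reference, the checks kubeadm itself recommends further down (same commands as in the advice block near the end of this log):
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    curl -sSL http://localhost:10248/healthz   # the endpoint the kubelet-check probes below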
	I1028 12:06:13.848745  182116 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993 for IP: 192.168.61.119
	I1028 12:06:13.848781  182116 certs.go:194] generating shared ca certs ...
	I1028 12:06:13.848808  182116 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:06:13.849029  182116 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:06:13.849112  182116 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:06:13.849126  182116 certs.go:256] generating profile certs ...
	I1028 12:06:13.849187  182116 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key
	I1028 12:06:13.849240  182116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt with IP's: []
	I1028 12:06:13.999161  182116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt ...
	I1028 12:06:13.999190  182116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: {Name:mka002e14ee15b70fa277801829838bd2fca06bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:06:13.999366  182116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key ...
	I1028 12:06:13.999379  182116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key: {Name:mk70eb5079989e0289826366a7c6eea5136d2d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:06:13.999458  182116 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee
	I1028 12:06:13.999474  182116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt.609c03ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.119]
	I1028 12:06:14.281296  182116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt.609c03ee ...
	I1028 12:06:14.281339  182116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt.609c03ee: {Name:mkb0a47aa4d5a31b2db5c16c4f50ad605269cf88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:06:14.332380  182116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee ...
	I1028 12:06:14.332434  182116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee: {Name:mk84b3a44cbd981d9a69c247887eefbd4fdbe146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:06:14.332613  182116 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt.609c03ee -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt
	I1028 12:06:14.332753  182116 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key
	I1028 12:06:14.332842  182116 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key
	I1028 12:06:14.332865  182116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt with IP's: []
	I1028 12:06:14.560068  182116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt ...
	I1028 12:06:14.560106  182116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt: {Name:mka60b3598bad051c6a41a4de0f54776cebf9b9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:06:14.560307  182116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key ...
	I1028 12:06:14.560325  182116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key: {Name:mk5b3ceb0b249929d9278eb75c4b15aaa7e90261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:06:14.560582  182116 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:06:14.560641  182116 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:06:14.560665  182116 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:06:14.560710  182116 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:06:14.560744  182116 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:06:14.560777  182116 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:06:14.560841  182116 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:06:14.562199  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:06:14.592380  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:06:14.620036  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:06:14.652043  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:06:14.680580  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:06:14.709906  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:06:14.738753  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:06:14.768688  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:06:14.802563  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:06:14.831667  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:06:14.859752  182116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:06:14.895607  182116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:06:14.915334  182116 ssh_runner.go:195] Run: openssl version
	I1028 12:06:14.921945  182116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:06:14.935150  182116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:06:14.940597  182116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:06:14.940676  182116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:06:14.947467  182116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:06:14.961218  182116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:06:14.973659  182116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:06:14.978386  182116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:06:14.978445  182116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:06:14.984654  182116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:06:14.997284  182116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:06:15.009915  182116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:06:15.014890  182116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:06:15.014963  182116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:06:15.020937  182116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
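	The openssl/ln pairs above are the standard OpenSSL hashed-directory layout: each certificate under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash so TLS clients can find it. A minimal sketch of the same idea for a single certificate; the hash is whatever openssl prints (the b5213941/51391683/3ec20f2e values above), not a fixed name:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"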
	I1028 12:06:15.033020  182116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:06:15.037562  182116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:06:15.037626  182116 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:06:15.037705  182116 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:06:15.037750  182116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:06:15.076474  182116 cri.go:89] found id: ""
	I1028 12:06:15.076573  182116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:06:15.088024  182116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
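	At this point the generated kubeadm config becomes the live /var/tmp/minikube/kubeadm.yaml. As a hedged aside (not something this test run does), the same file can be exercised without touching node state via kubeadm's dry-run mode, using the pinned binaries the test just verified:
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run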
	I1028 12:06:15.098859  182116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:06:15.109499  182116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:06:15.109537  182116 kubeadm.go:157] found existing configuration files:
	
	I1028 12:06:15.109587  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:06:15.119945  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:06:15.120003  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:06:15.130899  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:06:15.141713  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:06:15.141794  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:06:15.152320  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:06:15.162221  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:06:15.162292  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:06:15.173652  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:06:15.184226  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:06:15.184298  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:06:15.196161  182116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:06:15.327879  182116 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:06:15.327948  182116 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:06:15.494457  182116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:06:15.494631  182116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:06:15.494754  182116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:06:15.731669  182116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:06:15.733442  182116 out.go:235]   - Generating certificates and keys ...
	I1028 12:06:15.733558  182116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:06:15.733689  182116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:06:16.061841  182116 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1028 12:06:16.152411  182116 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1028 12:06:16.381663  182116 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1028 12:06:16.665915  182116 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1028 12:06:16.765455  182116 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1028 12:06:16.765681  182116 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-089993] and IPs [192.168.61.119 127.0.0.1 ::1]
	I1028 12:06:17.069819  182116 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1028 12:06:17.070092  182116 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-089993] and IPs [192.168.61.119 127.0.0.1 ::1]
	I1028 12:06:17.502203  182116 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1028 12:06:17.653998  182116 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1028 12:06:17.848027  182116 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1028 12:06:17.848234  182116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:06:18.146330  182116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:06:18.358182  182116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:06:18.673014  182116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:06:18.790082  182116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:06:18.806315  182116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:06:18.807906  182116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:06:18.807977  182116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:06:18.958446  182116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:06:18.960392  182116 out.go:235]   - Booting up control plane ...
	I1028 12:06:18.960533  182116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:06:18.969205  182116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:06:18.971311  182116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:06:18.971452  182116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:06:18.976495  182116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:06:58.964914  182116 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:06:58.965015  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:06:58.965304  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:07:03.965333  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:07:03.965639  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:07:13.965596  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:07:13.965794  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:07:33.965720  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:07:33.966008  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:08:13.966203  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:08:13.966447  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:08:13.966461  182116 kubeadm.go:310] 
	I1028 12:08:13.966495  182116 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:08:13.966559  182116 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:08:13.966587  182116 kubeadm.go:310] 
	I1028 12:08:13.966619  182116 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:08:13.966668  182116 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:08:13.966777  182116 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:08:13.966789  182116 kubeadm.go:310] 
	I1028 12:08:13.966896  182116 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:08:13.966926  182116 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:08:13.966955  182116 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:08:13.966961  182116 kubeadm.go:310] 
	I1028 12:08:13.967061  182116 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:08:13.967192  182116 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:08:13.967218  182116 kubeadm.go:310] 
	I1028 12:08:13.967372  182116 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:08:13.967457  182116 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:08:13.967580  182116 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:08:13.967693  182116 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:08:13.967706  182116 kubeadm.go:310] 
	I1028 12:08:13.968111  182116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:08:13.968202  182116 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:08:13.968280  182116 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1028 12:08:13.968415  182116 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-089993] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-089993] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-089993] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-089993] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 12:08:13.968460  182116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:08:15.223161  182116 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.254671159s)
	I1028 12:08:15.223268  182116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:08:15.238671  182116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:08:15.249209  182116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:08:15.249230  182116 kubeadm.go:157] found existing configuration files:
	
	I1028 12:08:15.249281  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:08:15.262031  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:08:15.262109  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:08:15.275345  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:08:15.287372  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:08:15.287432  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:08:15.297469  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:08:15.307195  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:08:15.307253  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:08:15.319228  182116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:08:15.330066  182116 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:08:15.330123  182116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:08:15.339996  182116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:08:15.419757  182116 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:08:15.419879  182116 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:08:15.574408  182116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:08:15.574600  182116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:08:15.574740  182116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:08:15.786032  182116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:08:15.787564  182116 out.go:235]   - Generating certificates and keys ...
	I1028 12:08:15.787698  182116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:08:15.787792  182116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:08:15.787896  182116 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:08:15.787976  182116 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:08:15.788069  182116 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:08:15.788142  182116 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:08:15.788229  182116 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:08:15.788347  182116 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:08:15.788495  182116 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:08:15.788612  182116 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:08:15.788666  182116 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:08:15.788742  182116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:08:15.928496  182116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:08:16.123628  182116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:08:16.703518  182116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:08:17.024252  182116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:08:17.046272  182116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:08:17.047494  182116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:08:17.047564  182116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:08:17.201776  182116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:08:17.203693  182116 out.go:235]   - Booting up control plane ...
	I1028 12:08:17.203858  182116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:08:17.208480  182116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:08:17.209391  182116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:08:17.211757  182116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:08:17.214470  182116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:08:57.213323  182116 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:08:57.213624  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:08:57.213878  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:09:02.214254  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:09:02.214509  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:09:12.214571  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:09:12.214787  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:09:32.215294  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:09:32.215514  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:10:12.216556  182116 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:10:12.216925  182116 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:10:12.216948  182116 kubeadm.go:310] 
	I1028 12:10:12.216991  182116 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:10:12.217033  182116 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:10:12.217041  182116 kubeadm.go:310] 
	I1028 12:10:12.217069  182116 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:10:12.217117  182116 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:10:12.217253  182116 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:10:12.217261  182116 kubeadm.go:310] 
	I1028 12:10:12.217353  182116 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:10:12.217383  182116 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:10:12.217448  182116 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:10:12.217476  182116 kubeadm.go:310] 
	I1028 12:10:12.217619  182116 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:10:12.217745  182116 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:10:12.217763  182116 kubeadm.go:310] 
	I1028 12:10:12.217926  182116 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:10:12.218030  182116 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:10:12.218101  182116 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:10:12.218161  182116 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:10:12.218180  182116 kubeadm.go:310] 
	I1028 12:10:12.219932  182116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:10:12.220020  182116 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:10:12.220129  182116 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:10:12.220143  182116 kubeadm.go:394] duration metric: took 3m57.182518698s to StartCluster
	I1028 12:10:12.220191  182116 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:10:12.220250  182116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:10:12.264932  182116 cri.go:89] found id: ""
	I1028 12:10:12.264969  182116 logs.go:282] 0 containers: []
	W1028 12:10:12.264980  182116 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:10:12.264988  182116 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:10:12.265052  182116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:10:12.304427  182116 cri.go:89] found id: ""
	I1028 12:10:12.304460  182116 logs.go:282] 0 containers: []
	W1028 12:10:12.304471  182116 logs.go:284] No container was found matching "etcd"
	I1028 12:10:12.304479  182116 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:10:12.304542  182116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:10:12.341715  182116 cri.go:89] found id: ""
	I1028 12:10:12.341750  182116 logs.go:282] 0 containers: []
	W1028 12:10:12.341761  182116 logs.go:284] No container was found matching "coredns"
	I1028 12:10:12.341769  182116 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:10:12.341829  182116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:10:12.378639  182116 cri.go:89] found id: ""
	I1028 12:10:12.378666  182116 logs.go:282] 0 containers: []
	W1028 12:10:12.378674  182116 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:10:12.378681  182116 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:10:12.378735  182116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:10:12.414156  182116 cri.go:89] found id: ""
	I1028 12:10:12.414190  182116 logs.go:282] 0 containers: []
	W1028 12:10:12.414200  182116 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:10:12.414208  182116 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:10:12.414267  182116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:10:12.451126  182116 cri.go:89] found id: ""
	I1028 12:10:12.451154  182116 logs.go:282] 0 containers: []
	W1028 12:10:12.451162  182116 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:10:12.451168  182116 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:10:12.451217  182116 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:10:12.487268  182116 cri.go:89] found id: ""
	I1028 12:10:12.487299  182116 logs.go:282] 0 containers: []
	W1028 12:10:12.487307  182116 logs.go:284] No container was found matching "kindnet"
	I1028 12:10:12.487322  182116 logs.go:123] Gathering logs for dmesg ...
	I1028 12:10:12.487338  182116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:10:12.501253  182116 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:10:12.501282  182116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:10:12.630367  182116 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:10:12.630397  182116 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:10:12.630413  182116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:10:12.734595  182116 logs.go:123] Gathering logs for container status ...
	I1028 12:10:12.734633  182116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:10:12.777591  182116 logs.go:123] Gathering logs for kubelet ...
	I1028 12:10:12.777623  182116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:10:12.827281  182116 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:10:12.827358  182116 out.go:270] * 
	* 
	W1028 12:10:12.827412  182116 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:10:12.827425  182116 out.go:270] * 
	* 
	W1028 12:10:12.828253  182116 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:10:12.831570  182116 out.go:201] 
	W1028 12:10:12.833136  182116 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:10:12.833183  182116 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:10:12.833209  182116 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:10:12.834804  182116 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-089993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 6 (240.282554ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:10:13.119515  185157 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-089993" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (297.56s)
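The suggestions in the failure above amount to inspecting the kubelet and the CRI-O containers on the node, then retrying the start with an explicit cgroup driver. A minimal sketch of those steps, assuming SSH access to the VM through `minikube ssh` and reusing the profile name and flags taken from the failed command above (not a confirmed fix, just the log's own advice spelled out):

	# Inspect kubelet status and recent logs on the node
	minikube ssh -p old-k8s-version-089993 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-089993 -- sudo journalctl -xeu kubelet | tail -n 100

	# List any control-plane containers CRI-O managed to start
	minikube ssh -p old-k8s-version-089993 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

	# Retry with the cgroup driver suggested by the failure message
	minikube start -p old-k8s-version-089993 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd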

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-871884 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-871884 --alsologtostderr -v=3: exit status 82 (2m0.567954609s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-871884"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:08:04.740033  183878 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:08:04.740157  183878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:08:04.740170  183878 out.go:358] Setting ErrFile to fd 2...
	I1028 12:08:04.740177  183878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:08:04.740344  183878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:08:04.740588  183878 out.go:352] Setting JSON to false
	I1028 12:08:04.740674  183878 mustload.go:65] Loading cluster: no-preload-871884
	I1028 12:08:04.741032  183878 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:08:04.741114  183878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/config.json ...
	I1028 12:08:04.741301  183878 mustload.go:65] Loading cluster: no-preload-871884
	I1028 12:08:04.741494  183878 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:08:04.741549  183878 stop.go:39] StopHost: no-preload-871884
	I1028 12:08:04.741989  183878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:08:04.742071  183878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:08:04.757272  183878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I1028 12:08:04.757877  183878 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:08:04.758522  183878 main.go:141] libmachine: Using API Version  1
	I1028 12:08:04.758541  183878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:08:04.758905  183878 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:08:04.761389  183878 out.go:177] * Stopping node "no-preload-871884"  ...
	I1028 12:08:04.763252  183878 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 12:08:04.763304  183878 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:08:04.763586  183878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 12:08:04.763618  183878 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:08:04.766841  183878 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:08:04.767239  183878 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:06:22 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:08:04.767269  183878 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:08:04.767490  183878 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:08:04.767652  183878 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:08:04.767774  183878 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:08:04.767916  183878 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:08:04.901040  183878 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 12:08:04.965048  183878 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 12:08:05.036770  183878 main.go:141] libmachine: Stopping "no-preload-871884"...
	I1028 12:08:05.036804  183878 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:08:05.038674  183878 main.go:141] libmachine: (no-preload-871884) Calling .Stop
	I1028 12:08:05.042866  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 0/120
	I1028 12:08:06.044614  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 1/120
	I1028 12:08:07.046175  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 2/120
	I1028 12:08:08.048364  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 3/120
	I1028 12:08:09.050821  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 4/120
	I1028 12:08:10.052689  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 5/120
	I1028 12:08:11.054403  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 6/120
	I1028 12:08:12.056310  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 7/120
	I1028 12:08:13.057718  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 8/120
	I1028 12:08:14.060181  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 9/120
	I1028 12:08:15.062622  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 10/120
	I1028 12:08:16.064176  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 11/120
	I1028 12:08:17.065695  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 12/120
	I1028 12:08:18.067267  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 13/120
	I1028 12:08:19.069040  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 14/120
	I1028 12:08:20.070785  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 15/120
	I1028 12:08:21.072684  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 16/120
	I1028 12:08:22.074441  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 17/120
	I1028 12:08:23.076065  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 18/120
	I1028 12:08:24.077519  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 19/120
	I1028 12:08:25.079945  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 20/120
	I1028 12:08:26.081234  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 21/120
	I1028 12:08:27.082564  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 22/120
	I1028 12:08:28.084170  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 23/120
	I1028 12:08:29.085751  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 24/120
	I1028 12:08:30.088005  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 25/120
	I1028 12:08:31.089515  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 26/120
	I1028 12:08:32.090902  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 27/120
	I1028 12:08:33.092527  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 28/120
	I1028 12:08:34.094260  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 29/120
	I1028 12:08:35.096699  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 30/120
	I1028 12:08:36.098360  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 31/120
	I1028 12:08:37.100573  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 32/120
	I1028 12:08:38.101878  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 33/120
	I1028 12:08:39.103384  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 34/120
	I1028 12:08:40.105639  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 35/120
	I1028 12:08:41.107089  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 36/120
	I1028 12:08:42.109192  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 37/120
	I1028 12:08:43.110910  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 38/120
	I1028 12:08:44.112178  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 39/120
	I1028 12:08:45.114433  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 40/120
	I1028 12:08:46.116083  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 41/120
	I1028 12:08:47.117683  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 42/120
	I1028 12:08:48.120718  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 43/120
	I1028 12:08:49.122362  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 44/120
	I1028 12:08:50.124396  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 45/120
	I1028 12:08:51.125949  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 46/120
	I1028 12:08:52.128172  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 47/120
	I1028 12:08:53.129746  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 48/120
	I1028 12:08:54.132322  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 49/120
	I1028 12:08:55.134035  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 50/120
	I1028 12:08:56.135430  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 51/120
	I1028 12:08:57.137010  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 52/120
	I1028 12:08:58.138615  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 53/120
	I1028 12:08:59.140294  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 54/120
	I1028 12:09:00.141979  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 55/120
	I1028 12:09:01.143447  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 56/120
	I1028 12:09:02.144623  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 57/120
	I1028 12:09:03.145955  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 58/120
	I1028 12:09:04.148335  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 59/120
	I1028 12:09:05.150412  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 60/120
	I1028 12:09:06.151994  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 61/120
	I1028 12:09:07.153339  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 62/120
	I1028 12:09:08.154800  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 63/120
	I1028 12:09:09.156356  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 64/120
	I1028 12:09:10.158553  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 65/120
	I1028 12:09:11.160287  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 66/120
	I1028 12:09:12.161618  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 67/120
	I1028 12:09:13.163028  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 68/120
	I1028 12:09:14.164616  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 69/120
	I1028 12:09:15.166237  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 70/120
	I1028 12:09:16.168300  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 71/120
	I1028 12:09:17.169934  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 72/120
	I1028 12:09:18.171437  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 73/120
	I1028 12:09:19.172988  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 74/120
	I1028 12:09:20.175001  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 75/120
	I1028 12:09:21.176633  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 76/120
	I1028 12:09:22.178227  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 77/120
	I1028 12:09:23.179655  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 78/120
	I1028 12:09:24.181170  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 79/120
	I1028 12:09:25.182861  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 80/120
	I1028 12:09:26.184500  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 81/120
	I1028 12:09:27.185953  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 82/120
	I1028 12:09:28.188165  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 83/120
	I1028 12:09:29.189568  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 84/120
	I1028 12:09:30.191991  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 85/120
	I1028 12:09:31.194508  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 86/120
	I1028 12:09:32.196504  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 87/120
	I1028 12:09:33.197767  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 88/120
	I1028 12:09:34.200012  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 89/120
	I1028 12:09:35.202381  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 90/120
	I1028 12:09:36.203989  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 91/120
	I1028 12:09:37.205588  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 92/120
	I1028 12:09:38.207041  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 93/120
	I1028 12:09:39.209008  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 94/120
	I1028 12:09:40.210926  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 95/120
	I1028 12:09:41.212470  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 96/120
	I1028 12:09:42.214155  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 97/120
	I1028 12:09:43.215572  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 98/120
	I1028 12:09:44.216907  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 99/120
	I1028 12:09:45.219160  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 100/120
	I1028 12:09:46.221256  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 101/120
	I1028 12:09:47.222702  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 102/120
	I1028 12:09:48.224194  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 103/120
	I1028 12:09:49.225736  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 104/120
	I1028 12:09:50.227447  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 105/120
	I1028 12:09:51.229186  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 106/120
	I1028 12:09:52.230589  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 107/120
	I1028 12:09:53.232261  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 108/120
	I1028 12:09:54.233809  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 109/120
	I1028 12:09:55.236264  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 110/120
	I1028 12:09:56.237804  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 111/120
	I1028 12:09:57.239316  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 112/120
	I1028 12:09:58.240639  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 113/120
	I1028 12:09:59.242165  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 114/120
	I1028 12:10:00.244434  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 115/120
	I1028 12:10:01.245907  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 116/120
	I1028 12:10:02.247410  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 117/120
	I1028 12:10:03.249392  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 118/120
	I1028 12:10:04.251099  183878 main.go:141] libmachine: (no-preload-871884) Waiting for machine to stop 119/120
	I1028 12:10:05.251706  183878 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 12:10:05.251768  183878 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 12:10:05.254102  183878 out.go:201] 
	W1028 12:10:05.255658  183878 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 12:10:05.255675  183878 out.go:270] * 
	* 
	W1028 12:10:05.258406  183878 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:10:05.260002  183878 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-871884 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-871884 -n no-preload-871884
E1028 12:10:09.886390  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-871884 -n no-preload-871884: exit status 3 (18.583569165s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:10:23.845844  185108 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.156:22: connect: no route to host
	E1028 12:10:23.845866  185108 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.156:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-871884" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.15s)
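Exit status 82 here is minikube's GUEST_STOP_TIMEOUT: the kvm2 driver backed up /etc/cni and /etc/kubernetes, asked libvirt to stop the domain, then polled 120 times without the VM ever leaving the Running state. A hedged sketch for inspecting the domain directly with libvirt tooling (virsh access is an assumption; the test only goes through the driver API):

    # List the libvirt domains the kvm2 driver created and their states
    virsh -c qemu:///system list --all
    # Collect minikube's own logs, as the advice box recommends
    out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-871884
    # Last resort: hard power-off the stuck domain (destructive, like pulling the plug)
    virsh -c qemu:///system destroy no-preload-871884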

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-709250 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-709250 --alsologtostderr -v=3: exit status 82 (2m0.50157763s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-709250"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:09:04.392296  184798 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:09:04.392550  184798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:09:04.392560  184798 out.go:358] Setting ErrFile to fd 2...
	I1028 12:09:04.392564  184798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:09:04.392758  184798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:09:04.392971  184798 out.go:352] Setting JSON to false
	I1028 12:09:04.393074  184798 mustload.go:65] Loading cluster: embed-certs-709250
	I1028 12:09:04.393438  184798 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:09:04.393500  184798 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/config.json ...
	I1028 12:09:04.393707  184798 mustload.go:65] Loading cluster: embed-certs-709250
	I1028 12:09:04.393813  184798 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:09:04.393837  184798 stop.go:39] StopHost: embed-certs-709250
	I1028 12:09:04.394173  184798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:09:04.394227  184798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:09:04.411506  184798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
	I1028 12:09:04.411963  184798 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:09:04.412560  184798 main.go:141] libmachine: Using API Version  1
	I1028 12:09:04.412585  184798 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:09:04.412916  184798 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:09:04.415690  184798 out.go:177] * Stopping node "embed-certs-709250"  ...
	I1028 12:09:04.417179  184798 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 12:09:04.417207  184798 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:09:04.417434  184798 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 12:09:04.417459  184798 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:09:04.420496  184798 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:09:04.420931  184798 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:08:11 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:09:04.420977  184798 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:09:04.421129  184798 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:09:04.421302  184798 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:09:04.421433  184798 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:09:04.421567  184798 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:09:04.514307  184798 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 12:09:04.573770  184798 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 12:09:04.632303  184798 main.go:141] libmachine: Stopping "embed-certs-709250"...
	I1028 12:09:04.632333  184798 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:09:04.634019  184798 main.go:141] libmachine: (embed-certs-709250) Calling .Stop
	I1028 12:09:04.637607  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 0/120
	I1028 12:09:05.639122  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 1/120
	I1028 12:09:06.640427  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 2/120
	I1028 12:09:07.641907  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 3/120
	I1028 12:09:08.643374  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 4/120
	I1028 12:09:09.645626  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 5/120
	I1028 12:09:10.647220  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 6/120
	I1028 12:09:11.648741  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 7/120
	I1028 12:09:12.650167  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 8/120
	I1028 12:09:13.652152  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 9/120
	I1028 12:09:14.654707  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 10/120
	I1028 12:09:15.656868  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 11/120
	I1028 12:09:16.658267  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 12/120
	I1028 12:09:17.659811  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 13/120
	I1028 12:09:18.661182  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 14/120
	I1028 12:09:19.662980  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 15/120
	I1028 12:09:20.664404  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 16/120
	I1028 12:09:21.665915  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 17/120
	I1028 12:09:22.667414  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 18/120
	I1028 12:09:23.669096  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 19/120
	I1028 12:09:24.671650  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 20/120
	I1028 12:09:25.673571  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 21/120
	I1028 12:09:26.675171  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 22/120
	I1028 12:09:27.677089  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 23/120
	I1028 12:09:28.678651  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 24/120
	I1028 12:09:29.680613  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 25/120
	I1028 12:09:30.682086  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 26/120
	I1028 12:09:31.683436  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 27/120
	I1028 12:09:32.684886  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 28/120
	I1028 12:09:33.686508  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 29/120
	I1028 12:09:34.688614  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 30/120
	I1028 12:09:35.690172  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 31/120
	I1028 12:09:36.691580  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 32/120
	I1028 12:09:37.693430  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 33/120
	I1028 12:09:38.695125  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 34/120
	I1028 12:09:39.697400  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 35/120
	I1028 12:09:40.698863  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 36/120
	I1028 12:09:41.700188  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 37/120
	I1028 12:09:42.701995  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 38/120
	I1028 12:09:43.703925  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 39/120
	I1028 12:09:44.705998  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 40/120
	I1028 12:09:45.707559  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 41/120
	I1028 12:09:46.708781  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 42/120
	I1028 12:09:47.710231  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 43/120
	I1028 12:09:48.711965  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 44/120
	I1028 12:09:49.714132  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 45/120
	I1028 12:09:50.715598  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 46/120
	I1028 12:09:51.716925  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 47/120
	I1028 12:09:52.718396  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 48/120
	I1028 12:09:53.720220  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 49/120
	I1028 12:09:54.722080  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 50/120
	I1028 12:09:55.723488  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 51/120
	I1028 12:09:56.724975  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 52/120
	I1028 12:09:57.726385  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 53/120
	I1028 12:09:58.728077  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 54/120
	I1028 12:09:59.730343  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 55/120
	I1028 12:10:00.732340  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 56/120
	I1028 12:10:01.733979  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 57/120
	I1028 12:10:02.735330  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 58/120
	I1028 12:10:03.737343  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 59/120
	I1028 12:10:04.739484  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 60/120
	I1028 12:10:05.741436  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 61/120
	I1028 12:10:06.743093  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 62/120
	I1028 12:10:07.744849  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 63/120
	I1028 12:10:08.746315  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 64/120
	I1028 12:10:09.747919  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 65/120
	I1028 12:10:10.749486  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 66/120
	I1028 12:10:11.751318  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 67/120
	I1028 12:10:12.752863  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 68/120
	I1028 12:10:13.754199  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 69/120
	I1028 12:10:14.756452  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 70/120
	I1028 12:10:15.757859  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 71/120
	I1028 12:10:16.760062  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 72/120
	I1028 12:10:17.761395  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 73/120
	I1028 12:10:18.762779  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 74/120
	I1028 12:10:19.764733  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 75/120
	I1028 12:10:20.766115  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 76/120
	I1028 12:10:21.767503  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 77/120
	I1028 12:10:22.769127  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 78/120
	I1028 12:10:23.770483  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 79/120
	I1028 12:10:24.772698  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 80/120
	I1028 12:10:25.774945  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 81/120
	I1028 12:10:26.776128  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 82/120
	I1028 12:10:27.777784  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 83/120
	I1028 12:10:28.779122  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 84/120
	I1028 12:10:29.781117  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 85/120
	I1028 12:10:30.782493  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 86/120
	I1028 12:10:31.784260  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 87/120
	I1028 12:10:32.785726  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 88/120
	I1028 12:10:33.787789  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 89/120
	I1028 12:10:34.789186  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 90/120
	I1028 12:10:35.791002  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 91/120
	I1028 12:10:36.792496  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 92/120
	I1028 12:10:37.793939  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 93/120
	I1028 12:10:38.795416  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 94/120
	I1028 12:10:39.797948  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 95/120
	I1028 12:10:40.799267  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 96/120
	I1028 12:10:41.800788  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 97/120
	I1028 12:10:42.802176  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 98/120
	I1028 12:10:43.804047  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 99/120
	I1028 12:10:44.806296  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 100/120
	I1028 12:10:45.807855  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 101/120
	I1028 12:10:46.809308  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 102/120
	I1028 12:10:47.810735  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 103/120
	I1028 12:10:48.812327  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 104/120
	I1028 12:10:49.814462  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 105/120
	I1028 12:10:50.815780  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 106/120
	I1028 12:10:51.817407  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 107/120
	I1028 12:10:52.818855  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 108/120
	I1028 12:10:53.820462  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 109/120
	I1028 12:10:54.822702  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 110/120
	I1028 12:10:55.824377  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 111/120
	I1028 12:10:56.825737  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 112/120
	I1028 12:10:57.827216  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 113/120
	I1028 12:10:58.828597  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 114/120
	I1028 12:10:59.830647  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 115/120
	I1028 12:11:00.832148  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 116/120
	I1028 12:11:01.833843  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 117/120
	I1028 12:11:02.835405  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 118/120
	I1028 12:11:03.836847  184798 main.go:141] libmachine: (embed-certs-709250) Waiting for machine to stop 119/120
	I1028 12:11:04.838233  184798 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 12:11:04.838297  184798 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 12:11:04.840561  184798 out.go:201] 
	W1028 12:11:04.842165  184798 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 12:11:04.842190  184798 out.go:270] * 
	* 
	W1028 12:11:04.844755  184798 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:11:04.846116  184798 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-709250 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709250 -n embed-certs-709250
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709250 -n embed-certs-709250: exit status 3 (18.645758714s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:11:23.493906  185717 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host
	E1028 12:11:23.493935  185717 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-709250" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-089993 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-089993 create -f testdata/busybox.yaml: exit status 1 (43.974806ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-089993" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-089993 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 6 (231.987038ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:10:13.394716  185196 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-089993" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 6 (241.326876ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:10:13.638636  185242 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-089993" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-089993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-089993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m40.860206995s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-089993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-089993 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-089993 describe deploy/metrics-server -n kube-system: exit status 1 (45.176046ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-089993" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-089993 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 6 (232.298243ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:11:54.774329  186056 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-089993" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.14s)
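The addon enable fails because kubectl inside the VM cannot reach the apiserver on localhost:8443, consistent with the earlier FirstStart failure of the v1.20.0 control plane. A rough way to confirm whether the apiserver container ever came up, assuming SSH access through minikube (not something the test performs):

    # List all containers (crio runtime) and look for kube-apiserver crashes/restarts
    out/minikube-linux-amd64 ssh -p old-k8s-version-089993 -- sudo crictl ps -a
    # Inspect kubelet startup errors on the node
    out/minikube-linux-amd64 ssh -p old-k8s-version-089993 -- sudo journalctl -u kubelet -n 50 --no-pager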

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-871884 -n no-preload-871884
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-871884 -n no-preload-871884: exit status 3 (3.168085587s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:10:27.013890  185365 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.156:22: connect: no route to host
	E1028 12:10:27.013916  185365 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.156:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-871884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-871884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154495908s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.156:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-871884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-871884 -n no-preload-871884
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-871884 -n no-preload-871884: exit status 3 (3.061483205s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:10:36.229956  185498 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.156:22: connect: no route to host
	E1028 12:10:36.229983  185498 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.156:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-871884" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
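For context on the assertion that failed above: after a stop, the test reads the host state from `minikube status --format={{.Host}}` and requires the literal string "Stopped"; because SSH to the VM is unreachable ("no route to host"), the probe reports "Error" instead and the comparison fails, which then also breaks the follow-up `addons enable dashboard` step. The following is only a minimal sketch of that kind of check, assuming a hypothetical runStatus helper rather than the real test code:

	package main

	import (
		"fmt"
		"log"
	)

	// verifyStoppedState is an illustrative approximation of the post-stop
	// assertion reported above; it is NOT the real minikube test helper.
	// runStatus stands in for running `minikube status --format={{.Host}}`.
	func verifyStoppedState(runStatus func() (string, error)) error {
		state, err := runStatus()
		if err != nil {
			// The test tolerates a non-zero exit here ("status error ... may be ok");
			// only the reported state string decides pass/fail.
			log.Printf("status error: %v (may be ok)", err)
		}
		if state != "Stopped" {
			return fmt.Errorf("expected post-stop host status to be \"Stopped\" but got %q", state)
		}
		return nil
	}

	func main() {
		// With SSH to the VM unreachable, the probe reports "Error" and the check fails.
		fmt.Println(verifyStoppedState(func() (string, error) { return "Error", nil }))
	}

Run against a stub that returns "Error", the sketch produces the same kind of mismatch message seen in the test output.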

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-349222 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-349222 --alsologtostderr -v=3: exit status 82 (2m0.520580159s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-349222"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:10:30.963083  185479 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:10:30.963190  185479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:10:30.963195  185479 out.go:358] Setting ErrFile to fd 2...
	I1028 12:10:30.963200  185479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:10:30.963385  185479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:10:30.963593  185479 out.go:352] Setting JSON to false
	I1028 12:10:30.963670  185479 mustload.go:65] Loading cluster: default-k8s-diff-port-349222
	I1028 12:10:30.964017  185479 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:10:30.964089  185479 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:10:30.964264  185479 mustload.go:65] Loading cluster: default-k8s-diff-port-349222
	I1028 12:10:30.964369  185479 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:10:30.964402  185479 stop.go:39] StopHost: default-k8s-diff-port-349222
	I1028 12:10:30.964756  185479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:10:30.964805  185479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:10:30.981155  185479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38073
	I1028 12:10:30.981765  185479 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:10:30.982341  185479 main.go:141] libmachine: Using API Version  1
	I1028 12:10:30.982361  185479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:10:30.982741  185479 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:10:30.985280  185479 out.go:177] * Stopping node "default-k8s-diff-port-349222"  ...
	I1028 12:10:30.986760  185479 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1028 12:10:30.986801  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:10:30.987110  185479 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1028 12:10:30.987141  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:10:30.990178  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:10:30.990583  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:09:04 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:10:30.990620  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:10:30.990741  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:10:30.990900  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:10:30.991025  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:10:30.991188  185479 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:10:31.081621  185479 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1028 12:10:31.146045  185479 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1028 12:10:31.223341  185479 main.go:141] libmachine: Stopping "default-k8s-diff-port-349222"...
	I1028 12:10:31.223380  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:10:31.225077  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Stop
	I1028 12:10:31.228884  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 0/120
	I1028 12:10:32.230173  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 1/120
	I1028 12:10:33.232190  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 2/120
	I1028 12:10:34.233949  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 3/120
	I1028 12:10:35.235329  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 4/120
	I1028 12:10:36.237546  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 5/120
	I1028 12:10:37.239135  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 6/120
	I1028 12:10:38.240587  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 7/120
	I1028 12:10:39.242147  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 8/120
	I1028 12:10:40.243564  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 9/120
	I1028 12:10:41.244996  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 10/120
	I1028 12:10:42.246399  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 11/120
	I1028 12:10:43.248014  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 12/120
	I1028 12:10:44.249342  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 13/120
	I1028 12:10:45.251176  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 14/120
	I1028 12:10:46.253436  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 15/120
	I1028 12:10:47.255037  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 16/120
	I1028 12:10:48.256532  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 17/120
	I1028 12:10:49.258193  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 18/120
	I1028 12:10:50.259634  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 19/120
	I1028 12:10:51.262233  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 20/120
	I1028 12:10:52.263774  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 21/120
	I1028 12:10:53.265156  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 22/120
	I1028 12:10:54.266596  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 23/120
	I1028 12:10:55.267958  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 24/120
	I1028 12:10:56.270058  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 25/120
	I1028 12:10:57.271455  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 26/120
	I1028 12:10:58.273176  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 27/120
	I1028 12:10:59.274658  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 28/120
	I1028 12:11:00.276094  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 29/120
	I1028 12:11:01.278465  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 30/120
	I1028 12:11:02.279974  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 31/120
	I1028 12:11:03.281511  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 32/120
	I1028 12:11:04.283425  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 33/120
	I1028 12:11:05.284808  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 34/120
	I1028 12:11:06.286726  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 35/120
	I1028 12:11:07.288331  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 36/120
	I1028 12:11:08.290141  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 37/120
	I1028 12:11:09.291504  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 38/120
	I1028 12:11:10.293142  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 39/120
	I1028 12:11:11.295496  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 40/120
	I1028 12:11:12.296859  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 41/120
	I1028 12:11:13.298610  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 42/120
	I1028 12:11:14.300160  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 43/120
	I1028 12:11:15.301733  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 44/120
	I1028 12:11:16.303960  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 45/120
	I1028 12:11:17.305409  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 46/120
	I1028 12:11:18.306976  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 47/120
	I1028 12:11:19.308490  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 48/120
	I1028 12:11:20.309965  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 49/120
	I1028 12:11:21.311349  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 50/120
	I1028 12:11:22.312815  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 51/120
	I1028 12:11:23.314311  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 52/120
	I1028 12:11:24.315735  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 53/120
	I1028 12:11:25.317127  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 54/120
	I1028 12:11:26.319445  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 55/120
	I1028 12:11:27.320688  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 56/120
	I1028 12:11:28.322168  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 57/120
	I1028 12:11:29.323697  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 58/120
	I1028 12:11:30.325178  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 59/120
	I1028 12:11:31.327593  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 60/120
	I1028 12:11:32.329100  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 61/120
	I1028 12:11:33.330351  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 62/120
	I1028 12:11:34.331803  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 63/120
	I1028 12:11:35.333284  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 64/120
	I1028 12:11:36.335736  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 65/120
	I1028 12:11:37.337309  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 66/120
	I1028 12:11:38.339007  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 67/120
	I1028 12:11:39.340599  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 68/120
	I1028 12:11:40.342059  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 69/120
	I1028 12:11:41.343328  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 70/120
	I1028 12:11:42.344796  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 71/120
	I1028 12:11:43.346371  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 72/120
	I1028 12:11:44.348015  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 73/120
	I1028 12:11:45.349431  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 74/120
	I1028 12:11:46.351580  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 75/120
	I1028 12:11:47.352935  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 76/120
	I1028 12:11:48.354430  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 77/120
	I1028 12:11:49.355790  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 78/120
	I1028 12:11:50.357263  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 79/120
	I1028 12:11:51.359494  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 80/120
	I1028 12:11:52.361090  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 81/120
	I1028 12:11:53.362570  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 82/120
	I1028 12:11:54.364149  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 83/120
	I1028 12:11:55.365475  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 84/120
	I1028 12:11:56.367581  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 85/120
	I1028 12:11:57.368837  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 86/120
	I1028 12:11:58.370194  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 87/120
	I1028 12:11:59.371753  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 88/120
	I1028 12:12:00.373188  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 89/120
	I1028 12:12:01.375454  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 90/120
	I1028 12:12:02.377450  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 91/120
	I1028 12:12:03.378827  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 92/120
	I1028 12:12:04.380159  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 93/120
	I1028 12:12:05.381603  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 94/120
	I1028 12:12:06.383741  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 95/120
	I1028 12:12:07.385333  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 96/120
	I1028 12:12:08.386680  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 97/120
	I1028 12:12:09.388220  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 98/120
	I1028 12:12:10.389571  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 99/120
	I1028 12:12:11.391855  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 100/120
	I1028 12:12:12.393711  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 101/120
	I1028 12:12:13.394962  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 102/120
	I1028 12:12:14.396664  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 103/120
	I1028 12:12:15.397979  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 104/120
	I1028 12:12:16.400072  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 105/120
	I1028 12:12:17.401584  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 106/120
	I1028 12:12:18.403167  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 107/120
	I1028 12:12:19.404717  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 108/120
	I1028 12:12:20.406272  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 109/120
	I1028 12:12:21.408632  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 110/120
	I1028 12:12:22.410125  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 111/120
	I1028 12:12:23.411487  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 112/120
	I1028 12:12:24.413023  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 113/120
	I1028 12:12:25.414768  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 114/120
	I1028 12:12:26.417376  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 115/120
	I1028 12:12:27.418723  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 116/120
	I1028 12:12:28.420335  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 117/120
	I1028 12:12:29.421832  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 118/120
	I1028 12:12:30.423439  185479 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for machine to stop 119/120
	I1028 12:12:31.424633  185479 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1028 12:12:31.424709  185479 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1028 12:12:31.426931  185479 out.go:201] 
	W1028 12:12:31.428597  185479 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1028 12:12:31.428616  185479 out.go:270] * 
	* 
	W1028 12:12:31.431487  185479 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:12:31.433957  185479 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-349222 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
E1028 12:12:38.999198  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222: exit status 3 (18.58638776s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:12:50.021923  186335 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.75:22: connect: no route to host
	E1028 12:12:50.021948  186335 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.75:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349222" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)
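The stop failure above follows a fixed pattern: the kvm2 driver polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through 119/120) and, when the machine never leaves "Running", minikube surfaces GUEST_STOP_TIMEOUT and exits with status 82. A minimal sketch of that polling shape, with hypothetical names and a configurable interval (not the actual libmachine/driver code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop is a rough sketch of the 120-attempt polling loop visible in
	// the log above ("Waiting for machine to stop N/120"). Names and the
	// interval parameter are assumptions made for illustration only.
	func waitForStop(getState func() string, interval time.Duration) error {
		for i := 0; i < 120; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/120\n", i)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A VM that never leaves "Running" exhausts all attempts; minikube then
		// reports this as GUEST_STOP_TIMEOUT (exit status 82 in the log above).
		// A short interval is used here only so the sketch finishes quickly.
		err := waitForStop(func() string { return "Running" }, time.Millisecond)
		fmt.Println(err)
	}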

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709250 -n embed-certs-709250
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709250 -n embed-certs-709250: exit status 3 (3.167714273s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:11:26.661953  185814 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host
	E1028 12:11:26.661999  185814 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-709250 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-709250 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153093557s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-709250 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709250 -n embed-certs-709250
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709250 -n embed-certs-709250: exit status 3 (3.062780922s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1028 12:11:35.877926  185895 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host
	E1028 12:11:35.877978  185895 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-709250" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
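The MK_ADDON_ENABLE_PAUSED exit above comes from a pre-flight step: before enabling an addon, minikube checks whether the cluster is paused by listing containers (crictl over SSH), and with no SSH route to the stopped VM that check itself fails, so the command exits with status 11 before the addon is touched. A minimal sketch of that ordering, assuming a hypothetical listPaused stand-in rather than minikube's real API:

	package main

	import (
		"errors"
		"fmt"
	)

	// checkNotPaused sketches the pre-flight step implied by the
	// MK_ADDON_ENABLE_PAUSED error above. listPaused is a hypothetical
	// stand-in for listing containers via crictl over SSH.
	func checkNotPaused(listPaused func() ([]string, error)) error {
		paused, err := listPaused()
		if err != nil {
			// When SSH to the VM is unreachable, the check fails outright,
			// mirroring the "check paused: list paused: crictl list" error above.
			return fmt.Errorf("enabled failed: check paused: %w", err)
		}
		if len(paused) > 0 {
			return errors.New("cluster is paused; unpause it before enabling addons")
		}
		return nil
	}

	func main() {
		sshErr := errors.New("dial tcp 192.168.39.211:22: connect: no route to host")
		fmt.Println(checkNotPaused(func() ([]string, error) { return nil, sshErr }))
	}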

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (727.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-089993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-089993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m3.701573371s)

                                                
                                                
-- stdout --
	* [old-k8s-version-089993] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-089993" primary control-plane node in "old-k8s-version-089993" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-089993" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:11:58.314137  186170 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:11:58.314257  186170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:11:58.314265  186170 out.go:358] Setting ErrFile to fd 2...
	I1028 12:11:58.314273  186170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:11:58.314454  186170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:11:58.315013  186170 out.go:352] Setting JSON to false
	I1028 12:11:58.315980  186170 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6861,"bootTime":1730110657,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:11:58.316092  186170 start.go:139] virtualization: kvm guest
	I1028 12:11:58.318337  186170 out.go:177] * [old-k8s-version-089993] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:11:58.319707  186170 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:11:58.319753  186170 notify.go:220] Checking for updates...
	I1028 12:11:58.322657  186170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:11:58.323976  186170 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:11:58.325298  186170 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:11:58.326767  186170 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:11:58.328235  186170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:11:58.330113  186170 config.go:182] Loaded profile config "old-k8s-version-089993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:11:58.330758  186170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:11:58.330811  186170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:11:58.345963  186170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40815
	I1028 12:11:58.346387  186170 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:11:58.346963  186170 main.go:141] libmachine: Using API Version  1
	I1028 12:11:58.346989  186170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:11:58.347279  186170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:11:58.347463  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:11:58.349361  186170 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1028 12:11:58.350594  186170 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:11:58.350891  186170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:11:58.350924  186170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:11:58.365700  186170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I1028 12:11:58.366068  186170 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:11:58.366494  186170 main.go:141] libmachine: Using API Version  1
	I1028 12:11:58.366516  186170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:11:58.366795  186170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:11:58.366971  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:11:58.403452  186170 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:11:58.405085  186170 start.go:297] selected driver: kvm2
	I1028 12:11:58.405099  186170 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:11:58.405210  186170 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:11:58.405964  186170 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:11:58.406050  186170 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:11:58.421402  186170 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:11:58.421843  186170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:11:58.421878  186170 cni.go:84] Creating CNI manager for ""
	I1028 12:11:58.421921  186170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:11:58.421958  186170 start.go:340] cluster config:
	{Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:11:58.422064  186170 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:11:58.424100  186170 out.go:177] * Starting "old-k8s-version-089993" primary control-plane node in "old-k8s-version-089993" cluster
	I1028 12:11:58.425441  186170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:11:58.425478  186170 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 12:11:58.425488  186170 cache.go:56] Caching tarball of preloaded images
	I1028 12:11:58.425623  186170 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:11:58.425637  186170 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1028 12:11:58.425735  186170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:11:58.425905  186170 start.go:360] acquireMachinesLock for old-k8s-version-089993: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:15:33.350844  186170 start.go:364] duration metric: took 3m34.924904114s to acquireMachinesLock for "old-k8s-version-089993"
	I1028 12:15:33.350912  186170 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:33.350923  186170 fix.go:54] fixHost starting: 
	I1028 12:15:33.351392  186170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:33.351440  186170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:33.368339  186170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1028 12:15:33.368805  186170 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:33.369418  186170 main.go:141] libmachine: Using API Version  1
	I1028 12:15:33.369439  186170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:33.369784  186170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:33.369969  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:33.370125  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetState
	I1028 12:15:33.371873  186170 fix.go:112] recreateIfNeeded on old-k8s-version-089993: state=Stopped err=<nil>
	I1028 12:15:33.371908  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	W1028 12:15:33.372086  186170 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:33.374289  186170 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-089993" ...
	I1028 12:15:33.375597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .Start
	I1028 12:15:33.375787  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring networks are active...
	I1028 12:15:33.376736  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network default is active
	I1028 12:15:33.377208  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network mk-old-k8s-version-089993 is active
	I1028 12:15:33.377706  186170 main.go:141] libmachine: (old-k8s-version-089993) Getting domain xml...
	I1028 12:15:33.378449  186170 main.go:141] libmachine: (old-k8s-version-089993) Creating domain...
	I1028 12:15:34.645925  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting to get IP...
	I1028 12:15:34.646739  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.647234  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.647347  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.647218  187153 retry.go:31] will retry after 292.558863ms: waiting for machine to come up
	I1028 12:15:34.941609  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.942074  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.942102  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.942024  187153 retry.go:31] will retry after 331.872118ms: waiting for machine to come up
	I1028 12:15:35.275748  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.276283  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.276318  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.276244  187153 retry.go:31] will retry after 427.829102ms: waiting for machine to come up
	I1028 12:15:35.705935  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.706409  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.706438  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.706367  187153 retry.go:31] will retry after 371.58196ms: waiting for machine to come up
	I1028 12:15:36.079879  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.080445  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.080469  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.080392  187153 retry.go:31] will retry after 504.323728ms: waiting for machine to come up
	I1028 12:15:36.585967  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.586405  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.586436  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.586346  187153 retry.go:31] will retry after 676.776678ms: waiting for machine to come up
	I1028 12:15:37.265499  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:37.266087  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:37.266114  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:37.266037  187153 retry.go:31] will retry after 1.178891662s: waiting for machine to come up
	I1028 12:15:38.446927  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:38.447488  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:38.447518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:38.447431  187153 retry.go:31] will retry after 1.170920623s: waiting for machine to come up
	I1028 12:15:39.619731  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:39.620169  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:39.620198  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:39.620119  187153 retry.go:31] will retry after 1.49376255s: waiting for machine to come up
	I1028 12:15:41.115247  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:41.115785  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:41.115815  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:41.115737  187153 retry.go:31] will retry after 2.161966931s: waiting for machine to come up
	I1028 12:15:43.280454  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:43.280989  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:43.281026  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:43.280932  187153 retry.go:31] will retry after 2.179284899s: waiting for machine to come up
	I1028 12:15:45.462983  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:45.463534  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:45.463560  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:45.463491  187153 retry.go:31] will retry after 2.2623086s: waiting for machine to come up
	I1028 12:15:47.728769  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:47.729277  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:47.729332  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:47.729241  187153 retry.go:31] will retry after 4.393695309s: waiting for machine to come up
	I1028 12:15:52.126559  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126960  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has current primary IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126988  186170 main.go:141] libmachine: (old-k8s-version-089993) Found IP for machine: 192.168.61.119
	I1028 12:15:52.127021  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserving static IP address...
	I1028 12:15:52.127441  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.127474  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | skip adding static IP to network mk-old-k8s-version-089993 - found existing host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"}
	I1028 12:15:52.127486  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserved static IP address: 192.168.61.119
	I1028 12:15:52.127498  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting for SSH to be available...
	I1028 12:15:52.127551  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Getting to WaitForSSH function...
	I1028 12:15:52.129970  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130313  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.130349  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH client type: external
	I1028 12:15:52.130540  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa (-rw-------)
	I1028 12:15:52.130565  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:52.130578  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | About to run SSH command:
	I1028 12:15:52.130593  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | exit 0
	I1028 12:15:52.253686  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:52.254051  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:15:52.254719  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.257217  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257692  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.257719  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257996  186170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:15:52.258203  186170 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:52.258222  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:52.258456  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.260665  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.260972  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.261012  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.261139  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.261295  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261451  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261632  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.261786  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.261968  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.261979  186170 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:52.362092  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:52.362129  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362362  186170 buildroot.go:166] provisioning hostname "old-k8s-version-089993"
	I1028 12:15:52.362386  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362588  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.365124  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.365489  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365598  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.365768  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.365924  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.366060  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.366238  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.366424  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.366441  186170 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089993 && echo "old-k8s-version-089993" | sudo tee /etc/hostname
	I1028 12:15:52.485032  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089993
	
	I1028 12:15:52.485069  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.487733  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488095  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.488129  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488270  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.488458  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488724  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.488872  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.489063  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.489079  186170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089993/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:52.599940  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:52.599975  186170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:52.600009  186170 buildroot.go:174] setting up certificates
	I1028 12:15:52.600019  186170 provision.go:84] configureAuth start
	I1028 12:15:52.600028  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.600319  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.603047  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603357  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.603385  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603536  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.605827  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606164  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.606190  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606334  186170 provision.go:143] copyHostCerts
	I1028 12:15:52.606414  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:52.606429  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:52.606500  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:52.606650  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:52.606661  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:52.606693  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:52.606795  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:52.606805  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:52.606834  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:52.606904  186170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089993 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-089993]
	I1028 12:15:52.715475  186170 provision.go:177] copyRemoteCerts
	I1028 12:15:52.715531  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:52.715556  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.718456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718758  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.718801  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718993  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.719189  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.719339  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.719461  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:52.802994  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:52.832311  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:15:52.864304  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:52.892143  186170 provision.go:87] duration metric: took 292.108499ms to configureAuth
	I1028 12:15:52.892178  186170 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:52.892401  186170 config.go:182] Loaded profile config "old-k8s-version-089993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:15:52.892499  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.895607  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.895996  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.896031  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.896198  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.896442  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896615  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896796  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.897005  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.897225  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.897249  186170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:53.144636  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:53.144668  186170 machine.go:96] duration metric: took 886.451205ms to provisionDockerMachine
	I1028 12:15:53.144683  186170 start.go:293] postStartSetup for "old-k8s-version-089993" (driver="kvm2")
	I1028 12:15:53.144701  186170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:53.144739  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.145102  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:53.145135  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.147486  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147776  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.147805  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147926  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.148136  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.148297  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.148438  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.228968  186170 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:53.233756  186170 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:53.233783  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:53.233862  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:53.233981  186170 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:53.234114  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:53.244314  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:53.273027  186170 start.go:296] duration metric: took 128.321696ms for postStartSetup
	I1028 12:15:53.273067  186170 fix.go:56] duration metric: took 19.922145767s for fixHost
	I1028 12:15:53.273087  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.275762  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276036  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.276069  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276227  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.276431  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276610  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276759  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.276948  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:53.277130  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:53.277140  186170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:53.378277  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117753.349360033
	
	I1028 12:15:53.378307  186170 fix.go:216] guest clock: 1730117753.349360033
	I1028 12:15:53.378327  186170 fix.go:229] Guest: 2024-10-28 12:15:53.349360033 +0000 UTC Remote: 2024-10-28 12:15:53.273071551 +0000 UTC m=+234.997009775 (delta=76.288482ms)
	I1028 12:15:53.378346  186170 fix.go:200] guest clock delta is within tolerance: 76.288482ms
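The fix.go lines above compare the guest clock read over SSH (date +%s.%N) against the host's wall clock and accept the ~76ms drift as being within tolerance. Below is a minimal Go sketch of that comparison using the values from the log; the one-second tolerance and the helper name are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock is close enough to
// the host clock that no resync is needed. The tolerance is an assumed value
// for illustration; it is not taken from the minikube source.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values mirroring the log: guest reported 1730117753.349360033,
	// host observed 2024-10-28 12:15:53.273071551 UTC.
	guest := time.Unix(1730117753, 349360033).UTC()
	host := time.Date(2024, 10, 28, 12, 15, 53, 273071551, time.UTC)

	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}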
	I1028 12:15:53.378351  186170 start.go:83] releasing machines lock for "old-k8s-version-089993", held for 20.027466326s
	I1028 12:15:53.378379  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.378640  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:53.381602  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.381951  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.381980  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.382165  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382654  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382864  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382949  186170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:53.382997  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.383090  186170 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:53.383109  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.385829  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.385926  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386223  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386272  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386303  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386343  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386522  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386692  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.386704  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386849  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387012  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.387009  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.387217  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387355  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.462736  186170 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:53.490076  186170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:53.637493  186170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:53.643609  186170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:53.643668  186170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:53.660695  186170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:53.660725  186170 start.go:495] detecting cgroup driver to use...
	I1028 12:15:53.660797  186170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:53.677283  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:53.691838  186170 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:53.691914  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:53.706354  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:53.721257  186170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:53.843177  186170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:54.012260  186170 docker.go:233] disabling docker service ...
	I1028 12:15:54.012369  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:54.028355  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:54.042371  186170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:54.175559  186170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:54.308690  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:54.323918  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:54.343000  186170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:15:54.343072  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.354540  186170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:54.354620  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.366058  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.377720  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.388649  186170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:54.401499  186170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:54.414177  186170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:54.414250  186170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:54.429049  186170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
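The sequence above is a check-then-load fallback: reading the bridge netfilter sysctl fails because /proc/sys/net/bridge does not exist yet, so the br_netfilter module is loaded with modprobe and IPv4 forwarding is switched on. A rough local-exec sketch of the same pattern in Go follows; the error handling and function names are assumptions, not the minikube code path.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and folds its combined output into the error, so a
// failure surfaces the same stderr text seen in the log above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
	}
	return nil
}

func ensureBridgeNetfilter() error {
	// First try to read the sysctl; on a fresh guest the proc entry may be
	// missing until the module is loaded, which is the failure seen in the log.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err == nil {
		return nil
	}
	// Fall back to loading the kernel module, then enable IPv4 forwarding.
	if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
		return err
	}
	return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}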
	I1028 12:15:54.441955  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:54.588173  186170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:15:54.686671  186170 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:54.686732  186170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:54.692246  186170 start.go:563] Will wait 60s for crictl version
	I1028 12:15:54.692303  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:15:54.697056  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:54.749343  186170 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:54.749410  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.783554  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.817295  186170 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:15:54.818674  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:54.822118  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822477  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:54.822508  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822713  186170 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:54.827066  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
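The hosts-file rewrite above is idempotent: any existing host.minikube.internal line is filtered out before a single fresh "IP<TAB>hostname" mapping is appended, so repeated provisioning runs never accumulate duplicate entries. A small Go sketch of that filter-and-append idea, operating on a scratch file; the paths and helper name are illustrative only, not minikube code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line that already maps the given hostname and
// appends a single "ip<TAB>hostname" line, similar in spirit to the
// grep-and-echo pipeline shown in the log.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any existing mapping for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative only: work on a scratch copy rather than the real /etc/hosts.
	tmp := "/tmp/hosts.example"
	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := upsertHostsEntry(tmp, "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}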
	I1028 12:15:54.839718  186170 kubeadm.go:883] updating cluster {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:54.839871  186170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:15:54.839932  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:54.896582  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:54.896647  186170 ssh_runner.go:195] Run: which lz4
	I1028 12:15:54.901264  186170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:54.905758  186170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:54.905798  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:15:56.763719  186170 crio.go:462] duration metric: took 1.862485619s to copy over tarball
	I1028 12:15:56.763807  186170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:59.824110  186170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060253776s)
	I1028 12:15:59.824148  186170 crio.go:469] duration metric: took 3.060398276s to extract the tarball
	I1028 12:15:59.824158  186170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:59.871783  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:59.913216  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:59.913249  186170 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:15:59.913338  186170 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.913374  186170 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.913404  186170 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.913415  186170 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.913435  186170 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.913459  186170 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.913378  186170 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:15:59.913372  186170 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:15:59.914923  186170 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.914935  186170 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.914944  186170 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.914924  186170 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:15:59.915002  186170 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.915023  186170 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.107392  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.125355  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.128498  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.134762  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.138350  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.141722  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:16:00.186291  186170 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:16:00.186340  186170 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.186404  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253168  186170 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:16:00.253211  186170 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.253256  186170 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:16:00.253279  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253288  186170 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.253328  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290772  186170 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:16:00.290817  186170 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.290857  186170 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:16:00.290890  186170 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:16:00.290869  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290913  186170 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:16:00.290946  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290970  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.290896  186170 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.291016  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.291049  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.291080  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.317629  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.377316  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.377376  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.377463  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.377515  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.488216  186170 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:16:00.488279  186170 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.488337  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.513051  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.556242  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.556277  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.556380  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.556435  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.556544  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.556560  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.634253  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.737688  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.737739  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.737799  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:16:00.737870  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:16:00.737897  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:16:00.738000  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.832218  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:16:00.832247  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:16:00.832284  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:16:00.844460  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.880788  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:16:01.121687  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:01.269970  186170 cache_images.go:92] duration metric: took 1.356701981s to LoadCachedImages
	W1028 12:16:01.270091  186170 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 12:16:01.270114  186170 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1028 12:16:01.270229  186170 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089993 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:01.270317  186170 ssh_runner.go:195] Run: crio config
	I1028 12:16:01.330579  186170 cni.go:84] Creating CNI manager for ""
	I1028 12:16:01.330604  186170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:01.330615  186170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:01.330634  186170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089993 NodeName:old-k8s-version-089993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:16:01.330861  186170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089993"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:01.330940  186170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:16:01.342449  186170 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:01.342542  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:01.354804  186170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:16:01.373823  186170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:01.393848  186170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:16:01.414537  186170 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:01.419057  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:01.434491  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:01.605220  186170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:01.629171  186170 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993 for IP: 192.168.61.119
	I1028 12:16:01.629198  186170 certs.go:194] generating shared ca certs ...
	I1028 12:16:01.629223  186170 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:01.629411  186170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:01.629473  186170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:01.629486  186170 certs.go:256] generating profile certs ...
	I1028 12:16:01.629625  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key
	I1028 12:16:01.629692  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee
	I1028 12:16:01.629740  186170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key
	I1028 12:16:01.629886  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:01.629929  186170 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:01.629943  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:01.629984  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:01.630025  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:01.630060  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:01.630113  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:01.630911  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:01.673352  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:01.705371  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:01.731174  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:01.775555  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:16:01.809878  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:01.842241  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:01.876753  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:16:01.914897  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:01.945991  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:01.977763  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:02.010010  186170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:02.034184  186170 ssh_runner.go:195] Run: openssl version
	I1028 12:16:02.042784  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:02.055148  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060669  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060751  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.067345  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:02.079427  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:02.091613  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.096996  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.097061  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.103561  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:02.115762  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:02.128405  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133889  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133961  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.140274  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
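The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup convention for trusted CAs: the link name is the certificate's subject hash plus a collision index. A small sketch of how such a link is derived, assuming the minikubeCA.pem path from this log:

    # openssl prints the subject hash used for CA lookup; link the cert under
    # /etc/ssl/certs/<hash>.0 so TLS clients on the node trust it.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"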
	I1028 12:16:02.155800  186170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:02.162343  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:02.170755  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:02.179332  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:02.187694  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:02.196183  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:02.204538  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
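Each of the -checkend 86400 checks above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will, and a non-zero status would prompt regeneration. Illustrated with one of the paths from the log:

    # Exit 0: cert valid for at least another 24h; non-zero: due for renewal.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate still valid in 24h"
    else
        echo "certificate expires within 24h"
    fi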
	I1028 12:16:02.212604  186170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:02.212715  186170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:02.212796  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.260250  186170 cri.go:89] found id: ""
	I1028 12:16:02.260350  186170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:02.274246  186170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:02.274269  186170 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:02.274335  186170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:02.287972  186170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:02.288983  186170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:16:02.289661  186170 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-132631/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089993" cluster setting kubeconfig missing "old-k8s-version-089993" context setting]
	I1028 12:16:02.290778  186170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:02.292747  186170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:02.306303  186170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1028 12:16:02.306357  186170 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:02.306375  186170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:02.306438  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.348962  186170 cri.go:89] found id: ""
	I1028 12:16:02.349041  186170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:02.366483  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:02.377667  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:02.377690  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:02.377758  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:02.387857  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:02.387915  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:02.398137  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:02.408922  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:02.408992  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:02.419044  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.428952  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:02.429020  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.439488  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:02.450112  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:02.450174  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
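The four grep/rm pairs above apply the same test to every kubeconfig under /etc/kubernetes: keep the file only if it already points at the expected control-plane endpoint. A compact sketch of that cleanup loop, with the endpoint and paths taken from the log:

    # Remove stale kubeconfigs that do not reference the control-plane endpoint.
    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done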
	I1028 12:16:02.461051  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:02.472059  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.607734  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.452795  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.710145  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.811788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
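Rather than re-running a full kubeadm init, the restart path above replays only the individual init phases against the freshly written kubeadm.yaml. The sequence boils down to something like the following sketch (binary path and config file as in the log; the loop itself is illustrative):

    # Re-create certs, kubeconfigs, kubelet config, control-plane manifests, and local etcd.
    BIN=/var/lib/minikube/binaries/v1.20.0
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
        # $phase is intentionally unquoted so "certs all" expands to two arguments.
        sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done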
	I1028 12:16:03.903114  186170 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:03.903247  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.403775  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.904258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.403398  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.903353  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.403907  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.903762  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.403316  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.904259  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:08.403804  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:08.903726  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.404155  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.903968  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.403990  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.903742  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.403836  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.904088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.403293  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.903635  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:13.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:13.903443  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.404017  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.903385  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.403903  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.904106  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.403713  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.903397  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.404299  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.903855  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:18.403494  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:18.903364  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.403869  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.904257  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.404252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.904028  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.404218  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.903631  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.403882  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.904188  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:23.404152  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:23.904225  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.403333  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.904323  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.404279  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.904317  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.404253  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.904083  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.403621  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.903752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.404110  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.904058  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.404042  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.903819  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.404114  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.904140  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.404241  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.903586  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.403858  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.903566  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:33.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:33.903341  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.403703  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.903445  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.404040  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.904246  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.403798  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.903950  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.403912  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.903423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:38.403644  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:38.904220  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.404068  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.904158  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.403660  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.903678  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.404061  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.903568  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.404297  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.904036  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:43.404022  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:43.903570  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.403673  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.903585  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.403476  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.904069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.403906  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.904264  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.903991  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:48.404207  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:48.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.404088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.903614  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.403587  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.904256  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.404314  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.903794  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.404122  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.903312  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:53.403716  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:53.903325  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.404326  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.903529  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.403679  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.903480  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.403429  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.904252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.403496  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:58.404020  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:58.903743  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.403548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.903515  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.403423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.903757  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.403620  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.903710  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.403932  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.903729  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:03.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
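The run of pgrep calls above is a fixed-interval wait: roughly every 500ms minikube checks whether a kube-apiserver process whose command line mentions "minikube" exists, until a timeout elapses. A minimal shell equivalent of that poll (interval taken from the timestamps; the attempt count is illustrative):

    # Poll for the apiserver process every 0.5s, giving up after ~60s.
    for _ in $(seq 1 120); do
        if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
            echo "kube-apiserver is up"
            break
        fi
        sleep 0.5
    done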
	I1028 12:17:03.904015  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:03.904157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:03.952859  186170 cri.go:89] found id: ""
	I1028 12:17:03.952891  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.952903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:03.952911  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:03.952972  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:03.991366  186170 cri.go:89] found id: ""
	I1028 12:17:03.991395  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.991406  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:03.991414  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:03.991472  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:04.030462  186170 cri.go:89] found id: ""
	I1028 12:17:04.030494  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.030505  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:04.030513  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:04.030577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:04.066765  186170 cri.go:89] found id: ""
	I1028 12:17:04.066797  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.066808  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:04.066829  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:04.066890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:04.113262  186170 cri.go:89] found id: ""
	I1028 12:17:04.113291  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.113321  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:04.113329  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:04.113397  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:04.162767  186170 cri.go:89] found id: ""
	I1028 12:17:04.162804  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.162816  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:04.162832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:04.162906  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:04.209735  186170 cri.go:89] found id: ""
	I1028 12:17:04.209768  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.209780  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:04.209788  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:04.209853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:04.258945  186170 cri.go:89] found id: ""
	I1028 12:17:04.258981  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.258993  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:04.259004  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:04.259031  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:04.314152  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:04.314191  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:04.330109  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:04.330154  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:04.495068  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:04.495096  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:04.495111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:04.576574  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:04.576612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
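When the wait attempt fails, the same five diagnostic sources are gathered before the next retry; stripped of the wrapper, the collection amounts to the commands below (paths and line counts as in the log):

    # Diagnostics gathered between apiserver wait attempts.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a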
	I1028 12:17:07.129008  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:07.149770  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:07.149835  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:07.200603  186170 cri.go:89] found id: ""
	I1028 12:17:07.200636  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.200648  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:07.200656  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:07.200733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:07.242681  186170 cri.go:89] found id: ""
	I1028 12:17:07.242709  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.242717  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:07.242723  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:07.242770  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:07.286826  186170 cri.go:89] found id: ""
	I1028 12:17:07.286860  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.286873  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:07.286881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:07.286943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:07.327730  186170 cri.go:89] found id: ""
	I1028 12:17:07.327765  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.327777  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:07.327787  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:07.327855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:07.369138  186170 cri.go:89] found id: ""
	I1028 12:17:07.369167  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.369178  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:07.369187  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:07.369257  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:07.411640  186170 cri.go:89] found id: ""
	I1028 12:17:07.411678  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.411690  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:07.411697  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:07.411758  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:07.454066  186170 cri.go:89] found id: ""
	I1028 12:17:07.454099  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.454109  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:07.454119  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:07.454180  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:07.489981  186170 cri.go:89] found id: ""
	I1028 12:17:07.490011  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.490020  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:07.490030  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:07.490044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:07.559890  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:07.559916  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:07.559927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:07.641601  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:07.641647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.687694  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:07.687732  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:07.739346  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:07.739389  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:10.262069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:10.277467  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:10.277566  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:10.320331  186170 cri.go:89] found id: ""
	I1028 12:17:10.320366  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.320378  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:10.320387  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:10.320455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:10.357204  186170 cri.go:89] found id: ""
	I1028 12:17:10.357235  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.357252  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:10.357261  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:10.357324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:10.392480  186170 cri.go:89] found id: ""
	I1028 12:17:10.392510  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.392519  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:10.392526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:10.392574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:10.430084  186170 cri.go:89] found id: ""
	I1028 12:17:10.430120  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.430132  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:10.430140  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:10.430207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:10.479689  186170 cri.go:89] found id: ""
	I1028 12:17:10.479717  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.479724  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:10.479730  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:10.479786  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:10.520871  186170 cri.go:89] found id: ""
	I1028 12:17:10.520902  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.520912  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:10.520920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:10.520978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:10.559121  186170 cri.go:89] found id: ""
	I1028 12:17:10.559154  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.559167  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:10.559176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:10.559254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:10.596552  186170 cri.go:89] found id: ""
	I1028 12:17:10.596583  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.596594  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:10.596603  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:10.596615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:10.673014  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:10.673037  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:10.673055  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:10.762942  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:10.762982  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:10.805866  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:10.805901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:10.858861  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:10.858895  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:13.373936  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:13.387904  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:13.387969  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:13.435502  186170 cri.go:89] found id: ""
	I1028 12:17:13.435528  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.435536  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:13.435547  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:13.435593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:13.475592  186170 cri.go:89] found id: ""
	I1028 12:17:13.475621  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.475631  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:13.475639  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:13.475703  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:13.524964  186170 cri.go:89] found id: ""
	I1028 12:17:13.524993  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.525002  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:13.525010  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:13.525071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:13.570408  186170 cri.go:89] found id: ""
	I1028 12:17:13.570437  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.570446  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:13.570455  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:13.570515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:13.620981  186170 cri.go:89] found id: ""
	I1028 12:17:13.621008  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.621016  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:13.621022  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:13.621071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:13.657345  186170 cri.go:89] found id: ""
	I1028 12:17:13.657375  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.657385  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:13.657393  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:13.657455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:13.695975  186170 cri.go:89] found id: ""
	I1028 12:17:13.695998  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.696005  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:13.696012  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:13.696059  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:13.744055  186170 cri.go:89] found id: ""
	I1028 12:17:13.744093  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.744112  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:13.744128  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:13.744143  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:13.798898  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:13.798936  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:13.813630  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:13.813676  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:13.886699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:13.886733  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:13.886750  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:13.972377  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:13.972419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.518525  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:16.532512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:16.532594  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:16.573345  186170 cri.go:89] found id: ""
	I1028 12:17:16.573370  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.573377  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:16.573384  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:16.573449  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:16.611130  186170 cri.go:89] found id: ""
	I1028 12:17:16.611159  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.611170  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:16.611179  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:16.611242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:16.646155  186170 cri.go:89] found id: ""
	I1028 12:17:16.646180  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.646187  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:16.646194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:16.646253  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:16.680731  186170 cri.go:89] found id: ""
	I1028 12:17:16.680761  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.680770  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:16.680776  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:16.680836  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:16.725323  186170 cri.go:89] found id: ""
	I1028 12:17:16.725351  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.725361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:16.725370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:16.725429  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:16.761810  186170 cri.go:89] found id: ""
	I1028 12:17:16.761839  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.761850  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:16.761859  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:16.761919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:16.797737  186170 cri.go:89] found id: ""
	I1028 12:17:16.797771  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.797783  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:16.797791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:16.797854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:16.834045  186170 cri.go:89] found id: ""
	I1028 12:17:16.834077  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.834087  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:16.834098  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:16.834111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:16.885174  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:16.885211  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:16.900281  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:16.900312  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:16.973761  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:16.973784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:16.973799  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:17.058711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:17.058747  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:19.605867  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:19.620832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:19.620896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:19.660722  186170 cri.go:89] found id: ""
	I1028 12:17:19.660747  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.660757  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:19.660765  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:19.660825  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:19.698537  186170 cri.go:89] found id: ""
	I1028 12:17:19.698571  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.698581  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:19.698590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:19.698639  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:19.736911  186170 cri.go:89] found id: ""
	I1028 12:17:19.736945  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.736956  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:19.736972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:19.737041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:19.779343  186170 cri.go:89] found id: ""
	I1028 12:17:19.779371  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.779379  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:19.779384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:19.779432  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:19.824749  186170 cri.go:89] found id: ""
	I1028 12:17:19.824778  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.824788  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:19.824796  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:19.824861  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:19.862810  186170 cri.go:89] found id: ""
	I1028 12:17:19.862850  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.862862  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:19.862871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:19.862935  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:19.910552  186170 cri.go:89] found id: ""
	I1028 12:17:19.910583  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.910592  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:19.910601  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:19.910663  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:19.956806  186170 cri.go:89] found id: ""
	I1028 12:17:19.956838  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.956850  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:19.956862  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:19.956879  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:20.018142  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:20.018187  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:20.035656  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:20.035696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:20.112484  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:20.112515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:20.112535  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:20.203034  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:20.203079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:22.749198  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:22.762993  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:22.763073  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:22.808879  186170 cri.go:89] found id: ""
	I1028 12:17:22.808923  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.808934  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:22.808943  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:22.809013  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:22.845367  186170 cri.go:89] found id: ""
	I1028 12:17:22.845393  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.845401  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:22.845407  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:22.845457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:22.884841  186170 cri.go:89] found id: ""
	I1028 12:17:22.884870  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.884877  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:22.884884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:22.884936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:22.921830  186170 cri.go:89] found id: ""
	I1028 12:17:22.921857  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.921865  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:22.921871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:22.921917  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:22.958981  186170 cri.go:89] found id: ""
	I1028 12:17:22.959016  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.959028  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:22.959038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:22.959138  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:22.993987  186170 cri.go:89] found id: ""
	I1028 12:17:22.994022  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.994033  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:22.994041  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:22.994112  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:23.036235  186170 cri.go:89] found id: ""
	I1028 12:17:23.036262  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.036270  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:23.036276  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:23.036326  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:23.084209  186170 cri.go:89] found id: ""
	I1028 12:17:23.084237  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.084248  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:23.084260  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:23.084274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:23.168684  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:23.168725  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:23.211205  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:23.211246  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:23.269140  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:23.269174  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:23.283588  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:23.283620  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:23.363349  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:25.864503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:25.881420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:25.881505  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:25.920194  186170 cri.go:89] found id: ""
	I1028 12:17:25.920230  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.920242  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:25.920250  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:25.920319  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:25.982898  186170 cri.go:89] found id: ""
	I1028 12:17:25.982940  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.982952  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:25.982960  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:25.983026  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:26.042807  186170 cri.go:89] found id: ""
	I1028 12:17:26.042848  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.042856  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:26.042863  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:26.042914  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:26.081683  186170 cri.go:89] found id: ""
	I1028 12:17:26.081717  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.081729  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:26.081738  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:26.081811  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:26.118390  186170 cri.go:89] found id: ""
	I1028 12:17:26.118419  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.118426  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:26.118433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:26.118482  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:26.154065  186170 cri.go:89] found id: ""
	I1028 12:17:26.154100  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.154108  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:26.154114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:26.154168  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:26.195602  186170 cri.go:89] found id: ""
	I1028 12:17:26.195634  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.195645  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:26.195656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:26.195711  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:26.237315  186170 cri.go:89] found id: ""
	I1028 12:17:26.237350  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.237361  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:26.237371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:26.237383  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:26.319079  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:26.319121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:26.360967  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:26.360996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:26.414689  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:26.414728  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:26.429733  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:26.429763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:26.503297  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:29.003479  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:29.017833  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:29.017908  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:29.067759  186170 cri.go:89] found id: ""
	I1028 12:17:29.067785  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.067793  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:29.067799  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:29.067856  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:29.114369  186170 cri.go:89] found id: ""
	I1028 12:17:29.114401  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.114411  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:29.114419  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:29.114511  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:29.154640  186170 cri.go:89] found id: ""
	I1028 12:17:29.154672  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.154683  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:29.154692  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:29.154749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:29.194296  186170 cri.go:89] found id: ""
	I1028 12:17:29.194331  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.194341  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:29.194349  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:29.194413  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:29.239107  186170 cri.go:89] found id: ""
	I1028 12:17:29.239133  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.239146  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:29.239152  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:29.239199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:29.274900  186170 cri.go:89] found id: ""
	I1028 12:17:29.274928  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.274937  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:29.274946  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:29.275010  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:29.310307  186170 cri.go:89] found id: ""
	I1028 12:17:29.310336  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.310346  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:29.310354  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:29.310421  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:29.345285  186170 cri.go:89] found id: ""
	I1028 12:17:29.345313  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.345351  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:29.345363  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:29.345379  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:29.402044  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:29.402094  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:29.417578  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:29.417615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:29.497733  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:29.497757  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:29.497773  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:29.587148  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:29.587202  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:32.132697  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:32.146675  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:32.146746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:32.188640  186170 cri.go:89] found id: ""
	I1028 12:17:32.188669  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.188681  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:32.188690  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:32.188749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:32.228690  186170 cri.go:89] found id: ""
	I1028 12:17:32.228726  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.228738  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:32.228745  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:32.228812  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:32.269133  186170 cri.go:89] found id: ""
	I1028 12:17:32.269180  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.269191  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:32.269200  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:32.269279  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:32.319757  186170 cri.go:89] found id: ""
	I1028 12:17:32.319796  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.319809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:32.319817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:32.319888  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:32.360072  186170 cri.go:89] found id: ""
	I1028 12:17:32.360104  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.360116  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:32.360125  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:32.360192  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:32.413256  186170 cri.go:89] found id: ""
	I1028 12:17:32.413286  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.413297  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:32.413319  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:32.413371  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:32.454505  186170 cri.go:89] found id: ""
	I1028 12:17:32.454536  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.454547  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:32.454555  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:32.454621  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:32.495091  186170 cri.go:89] found id: ""
	I1028 12:17:32.495129  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.495138  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:32.495148  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:32.495163  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:32.548669  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:32.548712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:32.566003  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:32.566044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:32.642079  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:32.642104  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:32.642117  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:32.727317  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:32.727361  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:35.278752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:35.292256  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:35.292344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:35.328420  186170 cri.go:89] found id: ""
	I1028 12:17:35.328447  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.328457  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:35.328465  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:35.328528  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:35.365120  186170 cri.go:89] found id: ""
	I1028 12:17:35.365153  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.365162  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:35.365170  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:35.365236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:35.402057  186170 cri.go:89] found id: ""
	I1028 12:17:35.402093  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.402105  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:35.402114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:35.402179  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:35.436496  186170 cri.go:89] found id: ""
	I1028 12:17:35.436523  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.436531  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:35.436536  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:35.436593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:35.473369  186170 cri.go:89] found id: ""
	I1028 12:17:35.473399  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.473409  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:35.473416  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:35.473480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:35.511258  186170 cri.go:89] found id: ""
	I1028 12:17:35.511293  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.511305  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:35.511337  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:35.511403  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:35.548430  186170 cri.go:89] found id: ""
	I1028 12:17:35.548461  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.548472  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:35.548479  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:35.548526  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:35.584324  186170 cri.go:89] found id: ""
	I1028 12:17:35.584357  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.584369  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:35.584379  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:35.584394  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:35.598813  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:35.598855  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:35.676911  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:35.676935  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:35.676948  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:35.757166  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:35.757205  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:35.801381  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:35.801411  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:38.356346  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:38.370346  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:38.370436  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:38.413623  186170 cri.go:89] found id: ""
	I1028 12:17:38.413653  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.413664  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:38.413671  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:38.413741  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:38.450656  186170 cri.go:89] found id: ""
	I1028 12:17:38.450682  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.450691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:38.450697  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:38.450754  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:38.491050  186170 cri.go:89] found id: ""
	I1028 12:17:38.491083  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.491090  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:38.491096  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:38.491146  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:38.529708  186170 cri.go:89] found id: ""
	I1028 12:17:38.529735  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.529743  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:38.529749  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:38.529808  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:38.566632  186170 cri.go:89] found id: ""
	I1028 12:17:38.566659  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.566673  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:38.566681  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:38.566746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:38.602323  186170 cri.go:89] found id: ""
	I1028 12:17:38.602362  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.602374  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:38.602382  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:38.602444  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:38.646462  186170 cri.go:89] found id: ""
	I1028 12:17:38.646487  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.646494  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:38.646499  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:38.646560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:38.681803  186170 cri.go:89] found id: ""
	I1028 12:17:38.681830  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.681837  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:38.681847  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:38.681858  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:38.697360  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:38.697387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:38.769502  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:38.769549  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:38.769566  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:38.852029  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:38.852068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:38.895585  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:38.895621  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.450844  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:41.464665  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:41.464731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:41.507199  186170 cri.go:89] found id: ""
	I1028 12:17:41.507265  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.507274  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:41.507280  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:41.507351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:41.550126  186170 cri.go:89] found id: ""
	I1028 12:17:41.550158  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.550168  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:41.550176  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:41.550237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:41.588914  186170 cri.go:89] found id: ""
	I1028 12:17:41.588942  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.588953  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:41.588961  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:41.589027  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:41.625255  186170 cri.go:89] found id: ""
	I1028 12:17:41.625285  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.625297  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:41.625315  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:41.625386  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:41.663786  186170 cri.go:89] found id: ""
	I1028 12:17:41.663816  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.663833  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:41.663844  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:41.663911  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:41.698330  186170 cri.go:89] found id: ""
	I1028 12:17:41.698357  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.698364  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:41.698371  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:41.698424  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:41.734658  186170 cri.go:89] found id: ""
	I1028 12:17:41.734688  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.734699  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:41.734707  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:41.734776  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:41.773227  186170 cri.go:89] found id: ""
	I1028 12:17:41.773262  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.773273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:41.773286  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:41.773301  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:41.815830  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:41.815866  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.866789  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:41.866832  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:41.882088  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:41.882121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:41.953895  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:41.953917  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:41.953933  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:44.538655  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:44.551644  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:44.551724  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:44.589370  186170 cri.go:89] found id: ""
	I1028 12:17:44.589400  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.589407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:44.589413  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:44.589473  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:44.625143  186170 cri.go:89] found id: ""
	I1028 12:17:44.625175  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.625185  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:44.625198  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:44.625283  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:44.664579  186170 cri.go:89] found id: ""
	I1028 12:17:44.664609  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.664620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:44.664628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:44.664692  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:44.700009  186170 cri.go:89] found id: ""
	I1028 12:17:44.700038  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.700046  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:44.700053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:44.700119  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:44.736283  186170 cri.go:89] found id: ""
	I1028 12:17:44.736316  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.736323  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:44.736331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:44.736393  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:44.772214  186170 cri.go:89] found id: ""
	I1028 12:17:44.772249  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.772261  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:44.772270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:44.772324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:44.808152  186170 cri.go:89] found id: ""
	I1028 12:17:44.808187  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.808198  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:44.808206  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:44.808276  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:44.844208  186170 cri.go:89] found id: ""
	I1028 12:17:44.844238  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.844251  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:44.844264  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:44.844286  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:44.925988  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:44.926029  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:44.964936  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:44.964969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:45.015630  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:45.015675  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:45.030537  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:45.030571  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:45.103861  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:47.604548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:47.618858  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:47.618941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:47.663237  186170 cri.go:89] found id: ""
	I1028 12:17:47.663267  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.663278  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:47.663285  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:47.663350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:47.703207  186170 cri.go:89] found id: ""
	I1028 12:17:47.703236  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.703244  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:47.703250  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:47.703322  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:47.743050  186170 cri.go:89] found id: ""
	I1028 12:17:47.743081  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.743091  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:47.743099  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:47.743161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:47.789956  186170 cri.go:89] found id: ""
	I1028 12:17:47.789982  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.789989  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:47.789996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:47.790055  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:47.833134  186170 cri.go:89] found id: ""
	I1028 12:17:47.833165  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.833177  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:47.833184  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:47.833241  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:47.870881  186170 cri.go:89] found id: ""
	I1028 12:17:47.870905  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.870916  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:47.870925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:47.870992  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:47.908121  186170 cri.go:89] found id: ""
	I1028 12:17:47.908155  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.908165  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:47.908173  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:47.908236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:47.946835  186170 cri.go:89] found id: ""
	I1028 12:17:47.946871  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.946884  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:47.946896  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:47.946914  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:47.999276  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:47.999316  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:48.016268  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:48.016306  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:48.099928  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:48.099959  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:48.099976  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:48.180885  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:48.180937  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:50.727685  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:50.741737  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:50.741820  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:50.782030  186170 cri.go:89] found id: ""
	I1028 12:17:50.782060  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.782081  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:50.782090  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:50.782157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:50.817423  186170 cri.go:89] found id: ""
	I1028 12:17:50.817453  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.817464  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:50.817471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:50.817523  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:50.857203  186170 cri.go:89] found id: ""
	I1028 12:17:50.857232  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.857242  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:50.857249  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:50.857324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:50.894196  186170 cri.go:89] found id: ""
	I1028 12:17:50.894236  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.894248  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:50.894259  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:50.894325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:50.930014  186170 cri.go:89] found id: ""
	I1028 12:17:50.930046  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.930056  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:50.930064  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:50.930128  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:50.967742  186170 cri.go:89] found id: ""
	I1028 12:17:50.967774  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.967785  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:50.967799  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:50.967857  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:51.013232  186170 cri.go:89] found id: ""
	I1028 12:17:51.013258  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.013269  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:51.013281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:51.013341  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:51.052871  186170 cri.go:89] found id: ""
	I1028 12:17:51.052900  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.052912  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:51.052923  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:51.052943  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:51.106536  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:51.106579  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:51.121628  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:51.121670  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:51.200215  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:51.200249  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:51.200266  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:51.291948  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:51.291996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:53.837066  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:53.851660  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:53.851747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:53.888799  186170 cri.go:89] found id: ""
	I1028 12:17:53.888835  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.888846  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:53.888855  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:53.888919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:53.923838  186170 cri.go:89] found id: ""
	I1028 12:17:53.923867  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.923875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:53.923880  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:53.923940  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:53.960264  186170 cri.go:89] found id: ""
	I1028 12:17:53.960293  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.960302  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:53.960307  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:53.960356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:53.995913  186170 cri.go:89] found id: ""
	I1028 12:17:53.995943  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.995952  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:53.995958  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:53.996009  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:54.032127  186170 cri.go:89] found id: ""
	I1028 12:17:54.032155  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.032163  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:54.032169  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:54.032219  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:54.070230  186170 cri.go:89] found id: ""
	I1028 12:17:54.070267  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.070279  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:54.070288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:54.070346  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:54.104992  186170 cri.go:89] found id: ""
	I1028 12:17:54.105024  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.105032  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:54.105038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:54.105099  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:54.140071  186170 cri.go:89] found id: ""
	I1028 12:17:54.140102  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.140113  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:54.140124  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:54.140137  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:54.195304  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:54.195353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:54.210315  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:54.210355  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:54.301247  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:54.301279  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:54.301300  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:54.382818  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:54.382876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:56.928740  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:56.942264  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:56.942334  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:56.979445  186170 cri.go:89] found id: ""
	I1028 12:17:56.979494  186170 logs.go:282] 0 containers: []
	W1028 12:17:56.979503  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:56.979510  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:56.979580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:57.017777  186170 cri.go:89] found id: ""
	I1028 12:17:57.017817  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.017831  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:57.017840  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:57.017954  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:57.058842  186170 cri.go:89] found id: ""
	I1028 12:17:57.058873  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.058881  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:57.058887  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:57.058941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:57.096365  186170 cri.go:89] found id: ""
	I1028 12:17:57.096393  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.096401  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:57.096408  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:57.096456  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:57.135395  186170 cri.go:89] found id: ""
	I1028 12:17:57.135425  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.135433  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:57.135440  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:57.135502  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:57.173426  186170 cri.go:89] found id: ""
	I1028 12:17:57.173455  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.173466  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:57.173473  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:57.173536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:57.209969  186170 cri.go:89] found id: ""
	I1028 12:17:57.210004  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.210015  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:57.210026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:57.210118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:57.252141  186170 cri.go:89] found id: ""
	I1028 12:17:57.252172  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.252182  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:57.252192  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:57.252206  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:57.304533  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:57.304576  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:57.319775  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:57.319807  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:57.385156  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:57.385186  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:57.385198  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:57.464777  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:57.464818  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:00.005073  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:00.033478  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:00.033580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:00.071437  186170 cri.go:89] found id: ""
	I1028 12:18:00.071462  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.071470  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:00.071475  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:00.071524  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:00.108147  186170 cri.go:89] found id: ""
	I1028 12:18:00.108183  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.108195  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:00.108204  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:00.108262  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:00.146129  186170 cri.go:89] found id: ""
	I1028 12:18:00.146157  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.146168  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:00.146176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:00.146237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:00.184211  186170 cri.go:89] found id: ""
	I1028 12:18:00.184239  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.184254  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:00.184262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:00.184325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:00.221949  186170 cri.go:89] found id: ""
	I1028 12:18:00.221980  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.221988  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:00.221995  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:00.222049  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:00.264173  186170 cri.go:89] found id: ""
	I1028 12:18:00.264203  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.264213  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:00.264230  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:00.264287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:00.302024  186170 cri.go:89] found id: ""
	I1028 12:18:00.302048  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.302057  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:00.302065  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:00.302134  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:00.340500  186170 cri.go:89] found id: ""
	I1028 12:18:00.340529  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.340542  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:00.340553  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:00.340574  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:00.392375  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:00.392419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:00.409823  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:00.409854  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:00.489965  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:00.489988  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:00.490000  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:00.574510  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:00.574553  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.116821  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:03.131120  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:03.131188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:03.168283  186170 cri.go:89] found id: ""
	I1028 12:18:03.168320  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.168331  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:03.168340  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:03.168404  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:03.210877  186170 cri.go:89] found id: ""
	I1028 12:18:03.210902  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.210910  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:03.210922  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:03.210981  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:03.248316  186170 cri.go:89] found id: ""
	I1028 12:18:03.248351  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.248362  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:03.248370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:03.248437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:03.287624  186170 cri.go:89] found id: ""
	I1028 12:18:03.287653  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.287663  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:03.287674  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:03.287738  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:03.323235  186170 cri.go:89] found id: ""
	I1028 12:18:03.323268  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.323281  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:03.323289  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:03.323350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:03.359449  186170 cri.go:89] found id: ""
	I1028 12:18:03.359481  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.359489  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:03.359496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:03.359544  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:03.397656  186170 cri.go:89] found id: ""
	I1028 12:18:03.397682  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.397690  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:03.397696  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:03.397756  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:03.436269  186170 cri.go:89] found id: ""
	I1028 12:18:03.436312  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.436325  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:03.436337  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:03.436353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.484677  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:03.484721  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:03.538826  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:03.538867  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:03.554032  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:03.554067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:03.630222  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:03.630256  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:03.630274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.208709  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:06.223650  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:06.223731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:06.264302  186170 cri.go:89] found id: ""
	I1028 12:18:06.264339  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.264348  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:06.264356  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:06.264415  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:06.306168  186170 cri.go:89] found id: ""
	I1028 12:18:06.306204  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.306212  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:06.306218  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:06.306306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:06.344883  186170 cri.go:89] found id: ""
	I1028 12:18:06.344909  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.344920  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:06.344927  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:06.344978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:06.382601  186170 cri.go:89] found id: ""
	I1028 12:18:06.382630  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.382640  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:06.382648  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:06.382720  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:06.428844  186170 cri.go:89] found id: ""
	I1028 12:18:06.428871  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.428878  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:06.428884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:06.428936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:06.480468  186170 cri.go:89] found id: ""
	I1028 12:18:06.480497  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.480508  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:06.480516  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:06.480581  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:06.525838  186170 cri.go:89] found id: ""
	I1028 12:18:06.525869  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.525882  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:06.525890  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:06.525950  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:06.572122  186170 cri.go:89] found id: ""
	I1028 12:18:06.572147  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.572154  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:06.572164  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:06.572176  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:06.642898  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:06.642925  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:06.642941  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.727353  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:06.727399  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:06.770170  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:06.770208  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:06.825593  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:06.825635  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:09.340955  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:09.355706  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:09.355783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:09.390008  186170 cri.go:89] found id: ""
	I1028 12:18:09.390039  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.390050  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:09.390057  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:09.390123  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:09.428209  186170 cri.go:89] found id: ""
	I1028 12:18:09.428247  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.428259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:09.428267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:09.428327  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:09.466499  186170 cri.go:89] found id: ""
	I1028 12:18:09.466524  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.466531  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:09.466538  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:09.466596  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:09.505384  186170 cri.go:89] found id: ""
	I1028 12:18:09.505418  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.505426  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:09.505433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:09.505492  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:09.543113  186170 cri.go:89] found id: ""
	I1028 12:18:09.543145  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.543154  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:09.543160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:09.543225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:09.581402  186170 cri.go:89] found id: ""
	I1028 12:18:09.581436  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.581446  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:09.581459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:09.581542  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:09.620586  186170 cri.go:89] found id: ""
	I1028 12:18:09.620616  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.620623  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:09.620629  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:09.620682  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:09.657220  186170 cri.go:89] found id: ""
	I1028 12:18:09.657246  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.657253  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:09.657261  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:09.657272  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:09.709636  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:09.709671  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:09.724476  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:09.724510  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:09.800194  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:09.800226  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:09.800242  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:09.882217  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:09.882254  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:12.425609  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:12.443417  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:12.443480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:12.509173  186170 cri.go:89] found id: ""
	I1028 12:18:12.509202  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.509211  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:12.509217  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:12.509287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:12.546564  186170 cri.go:89] found id: ""
	I1028 12:18:12.546595  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.546605  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:12.546612  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:12.546676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:12.584949  186170 cri.go:89] found id: ""
	I1028 12:18:12.584982  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.584990  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:12.584996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:12.585045  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:12.624513  186170 cri.go:89] found id: ""
	I1028 12:18:12.624543  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.624554  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:12.624562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:12.624624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:12.661811  186170 cri.go:89] found id: ""
	I1028 12:18:12.661854  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.661867  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:12.661876  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:12.661936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:12.700037  186170 cri.go:89] found id: ""
	I1028 12:18:12.700072  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.700080  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:12.700086  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:12.700149  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:12.740604  186170 cri.go:89] found id: ""
	I1028 12:18:12.740629  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.740637  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:12.740643  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:12.740696  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:12.779296  186170 cri.go:89] found id: ""
	I1028 12:18:12.779323  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.779333  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:12.779344  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:12.779358  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:12.830286  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:12.830330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:12.845423  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:12.845449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:12.923961  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:12.924003  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:12.924018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:13.003949  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:13.003990  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:15.552001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:15.565834  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:15.565899  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:15.598794  186170 cri.go:89] found id: ""
	I1028 12:18:15.598819  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.598828  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:15.598836  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:15.598904  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:15.637029  186170 cri.go:89] found id: ""
	I1028 12:18:15.637062  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.637073  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:15.637082  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:15.637148  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:15.675461  186170 cri.go:89] found id: ""
	I1028 12:18:15.675495  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.675503  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:15.675510  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:15.675577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:15.709169  186170 cri.go:89] found id: ""
	I1028 12:18:15.709198  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.709210  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:15.709217  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:15.709288  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:15.747687  186170 cri.go:89] found id: ""
	I1028 12:18:15.747715  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.747725  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:15.747740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:15.747802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:15.785554  186170 cri.go:89] found id: ""
	I1028 12:18:15.785587  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.785598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:15.785607  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:15.785674  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:15.828713  186170 cri.go:89] found id: ""
	I1028 12:18:15.828749  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.828762  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:15.828771  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:15.828834  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:15.864708  186170 cri.go:89] found id: ""
	I1028 12:18:15.864745  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.864757  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:15.864767  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:15.864788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:15.941064  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:15.941090  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:15.941102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:16.031546  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:16.031586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:16.074297  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:16.074343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:16.132758  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:16.132803  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:18.649877  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:18.663420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:18.663480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:18.698967  186170 cri.go:89] found id: ""
	I1028 12:18:18.698999  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.699011  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:18.699020  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:18.699088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:18.738095  186170 cri.go:89] found id: ""
	I1028 12:18:18.738128  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.738140  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:18.738149  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:18.738231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:18.780039  186170 cri.go:89] found id: ""
	I1028 12:18:18.780066  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.780074  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:18.780080  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:18.780131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:18.820458  186170 cri.go:89] found id: ""
	I1028 12:18:18.820492  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.820501  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:18.820512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:18.820569  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:18.860856  186170 cri.go:89] found id: ""
	I1028 12:18:18.860887  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.860896  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:18.860903  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:18.860965  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:18.900435  186170 cri.go:89] found id: ""
	I1028 12:18:18.900467  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.900478  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:18.900486  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:18.900547  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:18.938468  186170 cri.go:89] found id: ""
	I1028 12:18:18.938499  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.938508  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:18.938515  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:18.938570  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:18.975389  186170 cri.go:89] found id: ""
	I1028 12:18:18.975429  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.975440  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:18.975451  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:18.975466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:19.028306  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:19.028354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:19.043348  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:19.043382  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:19.117653  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:19.117721  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:19.117737  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:19.204218  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:19.204256  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:21.749564  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:21.768060  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:21.768131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:21.805414  186170 cri.go:89] found id: ""
	I1028 12:18:21.805443  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.805454  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:21.805462  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:21.805541  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:21.842649  186170 cri.go:89] found id: ""
	I1028 12:18:21.842681  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.842691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:21.842699  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:21.842767  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:21.883241  186170 cri.go:89] found id: ""
	I1028 12:18:21.883269  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.883279  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:21.883288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:21.883351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:21.926358  186170 cri.go:89] found id: ""
	I1028 12:18:21.926386  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.926394  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:21.926401  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:21.926453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:21.964671  186170 cri.go:89] found id: ""
	I1028 12:18:21.964705  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.964717  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:21.964726  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:21.964794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:22.019111  186170 cri.go:89] found id: ""
	I1028 12:18:22.019144  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.019154  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:22.019163  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:22.019223  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:22.057484  186170 cri.go:89] found id: ""
	I1028 12:18:22.057511  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.057518  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:22.057547  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:22.057606  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:22.096908  186170 cri.go:89] found id: ""
	I1028 12:18:22.096931  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.096938  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:22.096947  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:22.096962  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:22.180348  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:22.180386  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:22.224772  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:22.224808  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:22.277686  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:22.277726  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:22.293300  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:22.293330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:22.369990  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:24.870290  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:24.887030  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:24.887090  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:24.927592  186170 cri.go:89] found id: ""
	I1028 12:18:24.927620  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.927628  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:24.927635  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:24.927700  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:24.969025  186170 cri.go:89] found id: ""
	I1028 12:18:24.969059  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.969070  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:24.969077  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:24.969142  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:25.005439  186170 cri.go:89] found id: ""
	I1028 12:18:25.005476  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.005488  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:25.005496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:25.005573  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:25.046612  186170 cri.go:89] found id: ""
	I1028 12:18:25.046650  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.046659  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:25.046669  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:25.046733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:25.083162  186170 cri.go:89] found id: ""
	I1028 12:18:25.083186  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.083200  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:25.083209  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:25.083270  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:25.119277  186170 cri.go:89] found id: ""
	I1028 12:18:25.119322  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.119333  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:25.119341  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:25.119409  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:25.160875  186170 cri.go:89] found id: ""
	I1028 12:18:25.160906  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.160917  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:25.160925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:25.160987  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:25.194958  186170 cri.go:89] found id: ""
	I1028 12:18:25.194993  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.195003  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:25.195016  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:25.195032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:25.248571  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:25.248612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:25.264844  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:25.264876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:25.341487  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:25.341517  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:25.341552  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:25.419543  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:25.419586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:27.963358  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:27.977449  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:27.977509  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:28.013922  186170 cri.go:89] found id: ""
	I1028 12:18:28.013955  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.013963  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:28.013969  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:28.014050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:28.054628  186170 cri.go:89] found id: ""
	I1028 12:18:28.054658  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.054666  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:28.054671  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:28.054719  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:28.094289  186170 cri.go:89] found id: ""
	I1028 12:18:28.094315  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.094323  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:28.094330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:28.094390  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:28.131949  186170 cri.go:89] found id: ""
	I1028 12:18:28.131998  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.132011  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:28.132019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:28.132082  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:28.170428  186170 cri.go:89] found id: ""
	I1028 12:18:28.170461  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.170474  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:28.170483  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:28.170550  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:28.204953  186170 cri.go:89] found id: ""
	I1028 12:18:28.204980  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.204987  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:28.204994  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:28.205041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:28.247002  186170 cri.go:89] found id: ""
	I1028 12:18:28.247035  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.247044  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:28.247052  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:28.247122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:28.286700  186170 cri.go:89] found id: ""
	I1028 12:18:28.286730  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.286739  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:28.286747  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:28.286762  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:28.339162  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:28.339201  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:28.353667  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:28.353696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:28.426762  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:28.426784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:28.426800  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:28.511192  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:28.511232  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:31.054503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:31.069105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:31.069195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:31.112198  186170 cri.go:89] found id: ""
	I1028 12:18:31.112228  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.112237  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:31.112243  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:31.112306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:31.151487  186170 cri.go:89] found id: ""
	I1028 12:18:31.151522  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.151535  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:31.151544  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:31.151605  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:31.189604  186170 cri.go:89] found id: ""
	I1028 12:18:31.189636  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.189645  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:31.189651  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:31.189712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:31.231683  186170 cri.go:89] found id: ""
	I1028 12:18:31.231716  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.231726  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:31.231735  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:31.231793  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:31.268785  186170 cri.go:89] found id: ""
	I1028 12:18:31.268813  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.268824  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:31.268832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:31.268901  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:31.307450  186170 cri.go:89] found id: ""
	I1028 12:18:31.307475  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.307483  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:31.307489  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:31.307539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:31.342965  186170 cri.go:89] found id: ""
	I1028 12:18:31.342999  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.343011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:31.343019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:31.343084  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:31.380275  186170 cri.go:89] found id: ""
	I1028 12:18:31.380307  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.380317  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:31.380329  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:31.380343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:31.430198  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:31.430249  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:31.446355  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:31.446387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:31.530708  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:31.530738  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:31.530754  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:31.614033  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:31.614079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:34.156345  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:34.169766  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:34.169829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:34.208855  186170 cri.go:89] found id: ""
	I1028 12:18:34.208888  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.208903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:34.208910  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:34.208967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:34.258485  186170 cri.go:89] found id: ""
	I1028 12:18:34.258515  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.258524  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:34.258531  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:34.258593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:34.294139  186170 cri.go:89] found id: ""
	I1028 12:18:34.294168  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.294176  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:34.294182  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:34.294242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:34.329848  186170 cri.go:89] found id: ""
	I1028 12:18:34.329881  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.329892  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:34.329900  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:34.329967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:34.368223  186170 cri.go:89] found id: ""
	I1028 12:18:34.368249  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.368256  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:34.368262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:34.368310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:34.405101  186170 cri.go:89] found id: ""
	I1028 12:18:34.405133  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.405142  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:34.405149  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:34.405207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:34.441998  186170 cri.go:89] found id: ""
	I1028 12:18:34.442034  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.442045  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:34.442053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:34.442118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:34.478842  186170 cri.go:89] found id: ""
	I1028 12:18:34.478877  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.478888  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:34.478901  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:34.478917  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:34.532950  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:34.532991  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:34.548614  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:34.548643  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:34.623699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:34.623726  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:34.623743  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:34.702104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:34.702142  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.259720  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:37.276526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:37.276592  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:37.325783  186170 cri.go:89] found id: ""
	I1028 12:18:37.325823  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.325838  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:37.325847  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:37.325916  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:37.362754  186170 cri.go:89] found id: ""
	I1028 12:18:37.362784  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.362805  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:37.362813  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:37.362891  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:37.400428  186170 cri.go:89] found id: ""
	I1028 12:18:37.400465  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.400477  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:37.400485  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:37.400548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:37.438792  186170 cri.go:89] found id: ""
	I1028 12:18:37.438834  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.438846  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:37.438855  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:37.438918  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:37.477032  186170 cri.go:89] found id: ""
	I1028 12:18:37.477115  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.477126  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:37.477132  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:37.477199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:37.514834  186170 cri.go:89] found id: ""
	I1028 12:18:37.514866  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.514878  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:37.514888  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:37.514975  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:37.560797  186170 cri.go:89] found id: ""
	I1028 12:18:37.560821  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.560828  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:37.560835  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:37.560889  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:37.611126  186170 cri.go:89] found id: ""
	I1028 12:18:37.611156  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.611165  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:37.611177  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:37.611200  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.654809  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:37.654849  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:37.713519  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:37.713572  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:37.728043  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:37.728081  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:37.806662  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:37.806684  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:37.806702  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:40.388380  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:40.402330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:40.402405  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:40.444948  186170 cri.go:89] found id: ""
	I1028 12:18:40.444978  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.444990  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:40.445002  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:40.445062  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:40.482342  186170 cri.go:89] found id: ""
	I1028 12:18:40.482378  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.482387  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:40.482393  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:40.482457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:40.532277  186170 cri.go:89] found id: ""
	I1028 12:18:40.532307  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.532318  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:40.532326  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:40.532388  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:40.579092  186170 cri.go:89] found id: ""
	I1028 12:18:40.579122  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.579130  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:40.579136  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:40.579204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:40.617091  186170 cri.go:89] found id: ""
	I1028 12:18:40.617116  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.617124  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:40.617130  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:40.617188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:40.655830  186170 cri.go:89] found id: ""
	I1028 12:18:40.655861  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.655871  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:40.655879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:40.655949  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:40.693436  186170 cri.go:89] found id: ""
	I1028 12:18:40.693472  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.693480  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:40.693490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:40.693572  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:40.731576  186170 cri.go:89] found id: ""
	I1028 12:18:40.731604  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.731615  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:40.731626  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:40.731642  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:40.782395  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:40.782441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:40.797572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:40.797607  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:40.873037  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:40.873078  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:40.873095  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:40.950913  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:40.950954  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:43.493377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:43.508379  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:43.508453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:43.546621  186170 cri.go:89] found id: ""
	I1028 12:18:43.546652  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.546660  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:43.546667  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:43.546714  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:43.587430  186170 cri.go:89] found id: ""
	I1028 12:18:43.587455  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.587462  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:43.587468  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:43.587520  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:43.623597  186170 cri.go:89] found id: ""
	I1028 12:18:43.623625  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.623633  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:43.623640  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:43.623702  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:43.661235  186170 cri.go:89] found id: ""
	I1028 12:18:43.661266  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.661274  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:43.661281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:43.661344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:43.697400  186170 cri.go:89] found id: ""
	I1028 12:18:43.697437  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.697448  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:43.697457  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:43.697521  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:43.732995  186170 cri.go:89] found id: ""
	I1028 12:18:43.733028  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.733038  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:43.733047  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:43.733115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:43.772570  186170 cri.go:89] found id: ""
	I1028 12:18:43.772595  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.772602  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:43.772608  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:43.772669  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:43.814234  186170 cri.go:89] found id: ""
	I1028 12:18:43.814265  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.814273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:43.814283  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:43.814295  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:43.868582  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:43.868630  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:43.885098  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:43.885136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:43.967902  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:43.967937  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:43.967955  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:44.048973  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:44.049021  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.592668  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:46.608596  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:46.608664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:46.652750  186170 cri.go:89] found id: ""
	I1028 12:18:46.652777  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.652785  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:46.652790  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:46.652848  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:46.696309  186170 cri.go:89] found id: ""
	I1028 12:18:46.696333  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.696340  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:46.696346  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:46.696396  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:46.741580  186170 cri.go:89] found id: ""
	I1028 12:18:46.741609  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.741620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:46.741628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:46.741693  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:46.782589  186170 cri.go:89] found id: ""
	I1028 12:18:46.782620  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.782628  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:46.782635  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:46.782695  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:46.821602  186170 cri.go:89] found id: ""
	I1028 12:18:46.821632  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.821644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:46.821653  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:46.821713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:46.857025  186170 cri.go:89] found id: ""
	I1028 12:18:46.857050  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.857060  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:46.857067  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:46.857115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:46.893687  186170 cri.go:89] found id: ""
	I1028 12:18:46.893725  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.893737  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:46.893746  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:46.893818  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:46.930334  186170 cri.go:89] found id: ""
	I1028 12:18:46.930367  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.930377  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:46.930385  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:46.930398  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:46.980610  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:46.980650  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:46.995861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:46.995901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:47.069355  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:47.069383  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:47.069396  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:47.157228  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:47.157284  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:49.722229  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:49.735404  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:49.735507  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:49.776722  186170 cri.go:89] found id: ""
	I1028 12:18:49.776757  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.776768  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:49.776776  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:49.776844  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:49.812856  186170 cri.go:89] found id: ""
	I1028 12:18:49.812888  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.812898  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:49.812905  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:49.812989  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:49.849483  186170 cri.go:89] found id: ""
	I1028 12:18:49.849516  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.849544  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:49.849603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:49.849672  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:49.886525  186170 cri.go:89] found id: ""
	I1028 12:18:49.886555  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.886566  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:49.886574  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:49.886637  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:49.928249  186170 cri.go:89] found id: ""
	I1028 12:18:49.928281  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.928292  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:49.928299  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:49.928354  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:49.964587  186170 cri.go:89] found id: ""
	I1028 12:18:49.964619  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.964630  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:49.964641  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:49.964704  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:50.002275  186170 cri.go:89] found id: ""
	I1028 12:18:50.002305  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.002314  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:50.002321  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:50.002376  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:50.040949  186170 cri.go:89] found id: ""
	I1028 12:18:50.040979  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.040990  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:50.041003  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:50.041018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:50.086062  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:50.086098  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:50.138786  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:50.138837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:50.152992  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:50.153023  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:50.230432  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:50.230465  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:50.230481  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:52.813001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:52.825800  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:52.825879  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:52.863852  186170 cri.go:89] found id: ""
	I1028 12:18:52.863882  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.863893  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:52.863901  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:52.863967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:52.902963  186170 cri.go:89] found id: ""
	I1028 12:18:52.903003  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.903016  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:52.903024  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:52.903098  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:52.950862  186170 cri.go:89] found id: ""
	I1028 12:18:52.950893  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.950903  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:52.950912  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:52.950980  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:52.995840  186170 cri.go:89] found id: ""
	I1028 12:18:52.995872  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.995883  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:52.995891  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:52.995960  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:53.040153  186170 cri.go:89] found id: ""
	I1028 12:18:53.040179  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.040187  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:53.040194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:53.040256  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:53.077492  186170 cri.go:89] found id: ""
	I1028 12:18:53.077548  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.077561  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:53.077568  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:53.077618  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:53.114930  186170 cri.go:89] found id: ""
	I1028 12:18:53.114962  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.114973  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:53.114981  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:53.115064  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:53.152707  186170 cri.go:89] found id: ""
	I1028 12:18:53.152737  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.152747  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:53.152760  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:53.152777  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:53.195033  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:53.195068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:53.246464  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:53.246500  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:53.261430  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:53.261456  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:53.343518  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:53.343541  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:53.343556  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:55.924584  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:55.938627  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:55.938712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:55.976319  186170 cri.go:89] found id: ""
	I1028 12:18:55.976354  186170 logs.go:282] 0 containers: []
	W1028 12:18:55.976364  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:55.976372  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:55.976440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:56.013947  186170 cri.go:89] found id: ""
	I1028 12:18:56.013979  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.014002  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:56.014010  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:56.014065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:56.055934  186170 cri.go:89] found id: ""
	I1028 12:18:56.055963  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.055970  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:56.055976  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:56.056030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:56.092766  186170 cri.go:89] found id: ""
	I1028 12:18:56.092798  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.092809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:56.092817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:56.092883  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:56.129708  186170 cri.go:89] found id: ""
	I1028 12:18:56.129741  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.129748  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:56.129755  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:56.129817  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:56.169640  186170 cri.go:89] found id: ""
	I1028 12:18:56.169684  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.169693  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:56.169700  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:56.169761  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:56.210585  186170 cri.go:89] found id: ""
	I1028 12:18:56.210617  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.210626  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:56.210633  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:56.210683  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:56.248144  186170 cri.go:89] found id: ""
	I1028 12:18:56.248177  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.248189  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:56.248201  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:56.248216  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:56.298962  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:56.299004  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:56.313314  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:56.313351  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:56.389450  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:56.389473  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:56.389508  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:56.470888  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:56.470927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:59.012377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:59.025740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:59.025853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:59.063706  186170 cri.go:89] found id: ""
	I1028 12:18:59.063770  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.063782  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:59.063794  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:59.063855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:59.100543  186170 cri.go:89] found id: ""
	I1028 12:18:59.100573  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.100582  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:59.100590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:59.100651  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:59.140044  186170 cri.go:89] found id: ""
	I1028 12:18:59.140073  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.140080  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:59.140087  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:59.140133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:59.174872  186170 cri.go:89] found id: ""
	I1028 12:18:59.174905  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.174914  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:59.174920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:59.174971  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:59.210456  186170 cri.go:89] found id: ""
	I1028 12:18:59.210484  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.210492  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:59.210498  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:59.210560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:59.248441  186170 cri.go:89] found id: ""
	I1028 12:18:59.248474  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.248485  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:59.248494  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:59.248558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:59.286897  186170 cri.go:89] found id: ""
	I1028 12:18:59.286928  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.286937  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:59.286944  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:59.286996  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:59.323187  186170 cri.go:89] found id: ""
	I1028 12:18:59.323221  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.323232  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:59.323244  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:59.323260  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:59.401126  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:59.401156  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:59.401171  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:59.486673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:59.486712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:59.532117  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:59.532153  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:59.588697  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:59.588738  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.104377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:02.118007  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:02.118092  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:02.157674  186170 cri.go:89] found id: ""
	I1028 12:19:02.157705  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.157715  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:02.157724  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:02.157783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:02.194407  186170 cri.go:89] found id: ""
	I1028 12:19:02.194437  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.194448  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:02.194456  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:02.194546  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:02.232940  186170 cri.go:89] found id: ""
	I1028 12:19:02.232975  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.232988  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:02.232996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:02.233070  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:02.271554  186170 cri.go:89] found id: ""
	I1028 12:19:02.271595  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.271606  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:02.271613  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:02.271681  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:02.309932  186170 cri.go:89] found id: ""
	I1028 12:19:02.309965  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.309975  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:02.309984  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:02.310044  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:02.345704  186170 cri.go:89] found id: ""
	I1028 12:19:02.345732  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.345740  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:02.345747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:02.345794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:02.381727  186170 cri.go:89] found id: ""
	I1028 12:19:02.381760  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.381770  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:02.381778  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:02.381841  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:02.417888  186170 cri.go:89] found id: ""
	I1028 12:19:02.417922  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.417933  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:02.417943  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:02.417961  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:02.497427  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:02.497458  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:02.497471  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:02.580562  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:02.580600  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:02.619048  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:02.619087  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:02.677089  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:02.677136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
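	(Every iteration ends the same way: "kubectl describe nodes" fails because nothing is listening on localhost:8443, i.e. the kube-apiserver container was never created, so the loop keeps retrying. A hypothetical manual check on the guest, not part of the test run, would look like:

		sudo crictl ps -a --name=kube-apiserver   # empty output here means the container was never created
		curl -k https://localhost:8443/healthz    # connection refused while the apiserver is down
	)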
	I1028 12:19:05.192892  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:05.207240  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:05.207325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:05.244005  186170 cri.go:89] found id: ""
	I1028 12:19:05.244041  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.244070  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:05.244078  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:05.244130  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:05.285828  186170 cri.go:89] found id: ""
	I1028 12:19:05.285859  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.285869  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:05.285877  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:05.285936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:05.324666  186170 cri.go:89] found id: ""
	I1028 12:19:05.324694  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.324706  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:05.324713  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:05.324782  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:05.361365  186170 cri.go:89] found id: ""
	I1028 12:19:05.361401  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.361414  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:05.361423  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:05.361485  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:05.399962  186170 cri.go:89] found id: ""
	I1028 12:19:05.399996  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.400007  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:05.400017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:05.400116  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:05.438510  186170 cri.go:89] found id: ""
	I1028 12:19:05.438541  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.438553  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:05.438562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:05.438624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:05.477168  186170 cri.go:89] found id: ""
	I1028 12:19:05.477204  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.477214  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:05.477222  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:05.477286  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:05.513314  186170 cri.go:89] found id: ""
	I1028 12:19:05.513350  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.513362  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:05.513374  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:05.513388  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:05.568453  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:05.568490  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:05.583833  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:05.583870  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:05.659413  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:05.659438  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:05.659457  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:05.744673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:05.744714  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.291543  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:08.305747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:08.305829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:08.350508  186170 cri.go:89] found id: ""
	I1028 12:19:08.350536  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.350544  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:08.350550  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:08.350602  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:08.387432  186170 cri.go:89] found id: ""
	I1028 12:19:08.387463  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.387470  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:08.387476  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:08.387527  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:08.426351  186170 cri.go:89] found id: ""
	I1028 12:19:08.426392  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.426404  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:08.426412  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:08.426478  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:08.467546  186170 cri.go:89] found id: ""
	I1028 12:19:08.467577  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.467586  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:08.467592  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:08.467642  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:08.504317  186170 cri.go:89] found id: ""
	I1028 12:19:08.504347  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.504356  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:08.504363  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:08.504418  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:08.539598  186170 cri.go:89] found id: ""
	I1028 12:19:08.539630  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.539642  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:08.539655  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:08.539713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:08.578128  186170 cri.go:89] found id: ""
	I1028 12:19:08.578162  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.578173  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:08.578181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:08.578247  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:08.614276  186170 cri.go:89] found id: ""
	I1028 12:19:08.614309  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.614326  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:08.614338  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:08.614354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:08.691937  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:08.691961  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:08.691977  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:08.773046  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:08.773092  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.816419  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:08.816449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:08.868763  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:08.868811  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
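	(The pass above repeats roughly every three seconds while the apiserver stays unreachable. A minimal sketch of the same probe and log-gathering steps, assuming shell access to the node (e.g. via minikube ssh); every command is taken verbatim from the Run: lines in the log above, not a documented minikube interface:)

	# probe for a running apiserver process (the retry loop continues while this exits non-zero)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# ask CRI-O whether any control-plane containers exist at all
	sudo crictl ps -a --quiet --name=kube-apiserver
	# collect the same diagnostics minikube gathers after each failed probe
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig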
	I1028 12:19:11.384115  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:11.398325  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:11.398416  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:11.433049  186170 cri.go:89] found id: ""
	I1028 12:19:11.433081  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.433089  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:11.433097  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:11.433151  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:11.469221  186170 cri.go:89] found id: ""
	I1028 12:19:11.469249  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.469259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:11.469267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:11.469332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:11.506673  186170 cri.go:89] found id: ""
	I1028 12:19:11.506703  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.506714  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:11.506722  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:11.506802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:11.542657  186170 cri.go:89] found id: ""
	I1028 12:19:11.542684  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.542694  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:11.542702  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:11.542760  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:11.582873  186170 cri.go:89] found id: ""
	I1028 12:19:11.582903  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.582913  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:11.582921  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:11.582990  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:11.619742  186170 cri.go:89] found id: ""
	I1028 12:19:11.619770  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.619784  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:11.619791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:11.619854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:11.654169  186170 cri.go:89] found id: ""
	I1028 12:19:11.654200  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.654211  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:11.654220  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:11.654280  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:11.690586  186170 cri.go:89] found id: ""
	I1028 12:19:11.690614  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.690624  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:11.690637  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:11.690656  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:11.744337  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:11.744378  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.758405  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:11.758446  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:11.843252  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:11.843278  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:11.843289  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:11.924104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:11.924140  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:14.464177  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:14.478351  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:14.478423  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:14.518159  186170 cri.go:89] found id: ""
	I1028 12:19:14.518189  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.518200  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:14.518209  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:14.518260  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:14.565688  186170 cri.go:89] found id: ""
	I1028 12:19:14.565722  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.565734  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:14.565742  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:14.565802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:14.601994  186170 cri.go:89] found id: ""
	I1028 12:19:14.602021  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.602029  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:14.602054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:14.602122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:14.640100  186170 cri.go:89] found id: ""
	I1028 12:19:14.640142  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.640156  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:14.640166  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:14.640237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:14.675395  186170 cri.go:89] found id: ""
	I1028 12:19:14.675422  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.675430  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:14.675436  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:14.675494  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:14.715365  186170 cri.go:89] found id: ""
	I1028 12:19:14.715393  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.715404  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:14.715413  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:14.715466  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:14.761335  186170 cri.go:89] found id: ""
	I1028 12:19:14.761363  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.761373  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:14.761381  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:14.761446  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:14.800412  186170 cri.go:89] found id: ""
	I1028 12:19:14.800449  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.800461  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:14.800472  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:14.800486  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:14.882189  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:14.882227  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:14.926725  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:14.926752  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:14.979280  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:14.979329  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:14.993985  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:14.994019  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:15.063407  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.564258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:17.578611  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:17.578679  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:17.615753  186170 cri.go:89] found id: ""
	I1028 12:19:17.615784  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.615797  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:17.615805  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:17.615864  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:17.650812  186170 cri.go:89] found id: ""
	I1028 12:19:17.650851  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.650862  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:17.650870  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:17.651014  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:17.693006  186170 cri.go:89] found id: ""
	I1028 12:19:17.693039  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.693048  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:17.693054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:17.693104  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:17.733120  186170 cri.go:89] found id: ""
	I1028 12:19:17.733146  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.733153  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:17.733160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:17.733212  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:17.773002  186170 cri.go:89] found id: ""
	I1028 12:19:17.773029  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.773036  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:17.773042  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:17.773097  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:17.812560  186170 cri.go:89] found id: ""
	I1028 12:19:17.812590  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.812597  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:17.812603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:17.812653  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:17.848307  186170 cri.go:89] found id: ""
	I1028 12:19:17.848341  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.848349  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:17.848355  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:17.848402  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:17.888184  186170 cri.go:89] found id: ""
	I1028 12:19:17.888210  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.888217  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:17.888226  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:17.888238  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:17.901662  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:17.901692  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:17.975611  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.975634  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:17.975647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:18.054762  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:18.054801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:18.101269  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:18.101302  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:20.655292  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:20.671085  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:20.671161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:20.715368  186170 cri.go:89] found id: ""
	I1028 12:19:20.715397  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.715407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:20.715415  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:20.715476  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:20.762337  186170 cri.go:89] found id: ""
	I1028 12:19:20.762366  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.762374  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:20.762379  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:20.762437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:20.804710  186170 cri.go:89] found id: ""
	I1028 12:19:20.804740  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.804747  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:20.804759  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:20.804813  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:20.841158  186170 cri.go:89] found id: ""
	I1028 12:19:20.841189  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.841199  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:20.841208  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:20.841277  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:20.883976  186170 cri.go:89] found id: ""
	I1028 12:19:20.884016  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.884027  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:20.884035  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:20.884105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:20.930155  186170 cri.go:89] found id: ""
	I1028 12:19:20.930186  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.930194  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:20.930201  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:20.930265  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:20.967805  186170 cri.go:89] found id: ""
	I1028 12:19:20.967832  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.967840  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:20.967847  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:20.967896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:21.020010  186170 cri.go:89] found id: ""
	I1028 12:19:21.020038  186170 logs.go:282] 0 containers: []
	W1028 12:19:21.020046  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:21.020055  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:21.020079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:21.081013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:21.081054  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:21.096709  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:21.096741  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:21.172935  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:21.172957  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:21.172970  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:21.248909  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:21.248949  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:23.793748  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:23.809036  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:23.809107  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:23.848021  186170 cri.go:89] found id: ""
	I1028 12:19:23.848051  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.848064  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:23.848070  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:23.848122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:23.885253  186170 cri.go:89] found id: ""
	I1028 12:19:23.885278  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.885294  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:23.885302  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:23.885360  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:23.923423  186170 cri.go:89] found id: ""
	I1028 12:19:23.923475  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.923484  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:23.923490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:23.923554  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:23.963761  186170 cri.go:89] found id: ""
	I1028 12:19:23.963793  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.963809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:23.963820  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:23.963890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:24.001402  186170 cri.go:89] found id: ""
	I1028 12:19:24.001431  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.001440  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:24.001447  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:24.001512  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:24.042367  186170 cri.go:89] found id: ""
	I1028 12:19:24.042400  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.042410  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:24.042419  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:24.042480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:24.081838  186170 cri.go:89] found id: ""
	I1028 12:19:24.081865  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.081873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:24.081879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:24.081932  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:24.117066  186170 cri.go:89] found id: ""
	I1028 12:19:24.117096  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.117104  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:24.117113  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:24.117125  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:24.156892  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:24.156928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:24.210595  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:24.210631  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:24.226214  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:24.226248  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:24.304750  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:24.304775  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:24.304792  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:26.887059  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:26.901656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:26.901735  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:26.944377  186170 cri.go:89] found id: ""
	I1028 12:19:26.944407  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.944416  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:26.944425  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:26.944487  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:26.980794  186170 cri.go:89] found id: ""
	I1028 12:19:26.980827  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.980835  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:26.980841  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:26.980907  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:27.023661  186170 cri.go:89] found id: ""
	I1028 12:19:27.023686  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.023694  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:27.023701  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:27.023753  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:27.062325  186170 cri.go:89] found id: ""
	I1028 12:19:27.062353  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.062361  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:27.062369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:27.062417  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:27.101200  186170 cri.go:89] found id: ""
	I1028 12:19:27.101230  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.101237  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:27.101243  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:27.101300  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:27.139566  186170 cri.go:89] found id: ""
	I1028 12:19:27.139591  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.139598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:27.139605  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:27.139664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:27.183931  186170 cri.go:89] found id: ""
	I1028 12:19:27.183959  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.183968  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:27.183996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:27.184065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:27.226978  186170 cri.go:89] found id: ""
	I1028 12:19:27.227012  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.227027  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:27.227038  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:27.227067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:27.279752  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:27.279790  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:27.293477  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:27.293504  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:27.365813  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:27.365836  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:27.365850  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:27.458409  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:27.458466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:30.023363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:30.036965  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:30.037032  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:30.077599  186170 cri.go:89] found id: ""
	I1028 12:19:30.077627  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.077635  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:30.077642  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:30.077691  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:30.115071  186170 cri.go:89] found id: ""
	I1028 12:19:30.115103  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.115113  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:30.115121  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:30.115189  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:30.150636  186170 cri.go:89] found id: ""
	I1028 12:19:30.150665  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.150678  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:30.150684  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:30.150747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:30.188339  186170 cri.go:89] found id: ""
	I1028 12:19:30.188380  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.188390  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:30.188397  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:30.188452  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:30.224072  186170 cri.go:89] found id: ""
	I1028 12:19:30.224102  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.224113  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:30.224121  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:30.224185  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:30.258784  186170 cri.go:89] found id: ""
	I1028 12:19:30.258822  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.258834  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:30.258842  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:30.258903  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:30.302495  186170 cri.go:89] found id: ""
	I1028 12:19:30.302527  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.302535  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:30.302541  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:30.302590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:30.339170  186170 cri.go:89] found id: ""
	I1028 12:19:30.339201  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.339213  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:30.339223  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:30.339236  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:30.396664  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:30.396700  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:30.411609  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:30.411638  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:30.484168  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:30.484196  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:30.484212  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:30.567664  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:30.567704  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:33.111268  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:33.125143  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:33.125229  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:33.168662  186170 cri.go:89] found id: ""
	I1028 12:19:33.168701  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.168712  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:33.168722  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:33.168792  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:33.222421  186170 cri.go:89] found id: ""
	I1028 12:19:33.222451  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.222463  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:33.222471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:33.222536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:33.275637  186170 cri.go:89] found id: ""
	I1028 12:19:33.275669  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.275680  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:33.275689  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:33.275751  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:33.325787  186170 cri.go:89] found id: ""
	I1028 12:19:33.325818  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.325830  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:33.325840  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:33.325900  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:33.361597  186170 cri.go:89] found id: ""
	I1028 12:19:33.361634  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.361644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:33.361652  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:33.361744  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:33.401838  186170 cri.go:89] found id: ""
	I1028 12:19:33.401866  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.401874  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:33.401880  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:33.401941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:33.439315  186170 cri.go:89] found id: ""
	I1028 12:19:33.439342  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.439351  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:33.439359  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:33.439422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:33.479140  186170 cri.go:89] found id: ""
	I1028 12:19:33.479177  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.479188  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:33.479206  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:33.479222  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:33.534059  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:33.534102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:33.549379  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:33.549416  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:33.626567  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:33.626603  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:33.626619  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:33.702398  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:33.702441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.250145  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:36.265123  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:36.265193  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:36.304048  186170 cri.go:89] found id: ""
	I1028 12:19:36.304078  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.304087  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:36.304093  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:36.304141  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:36.348611  186170 cri.go:89] found id: ""
	I1028 12:19:36.348649  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.348660  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:36.348672  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:36.348739  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:36.390510  186170 cri.go:89] found id: ""
	I1028 12:19:36.390543  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.390555  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:36.390563  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:36.390627  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:36.430465  186170 cri.go:89] found id: ""
	I1028 12:19:36.430489  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.430496  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:36.430503  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:36.430556  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:36.472189  186170 cri.go:89] found id: ""
	I1028 12:19:36.472216  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.472226  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:36.472234  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:36.472332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:36.510029  186170 cri.go:89] found id: ""
	I1028 12:19:36.510057  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.510065  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:36.510073  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:36.510133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:36.548556  186170 cri.go:89] found id: ""
	I1028 12:19:36.548581  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.548589  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:36.548595  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:36.548641  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:36.592965  186170 cri.go:89] found id: ""
	I1028 12:19:36.592993  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.593002  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:36.593013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:36.593032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:36.608843  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:36.608878  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:36.680629  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:36.680655  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:36.680672  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:36.768605  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:36.768636  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.815293  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:36.815334  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:39.369371  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:39.382819  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:39.382905  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:39.421953  186170 cri.go:89] found id: ""
	I1028 12:19:39.421990  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.422018  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:39.422028  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:39.422088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:39.457426  186170 cri.go:89] found id: ""
	I1028 12:19:39.457461  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.457478  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:39.457484  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:39.457558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:39.494983  186170 cri.go:89] found id: ""
	I1028 12:19:39.495008  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.495018  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:39.495026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:39.495105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:39.530187  186170 cri.go:89] found id: ""
	I1028 12:19:39.530221  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.530233  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:39.530242  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:39.530308  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:39.571088  186170 cri.go:89] found id: ""
	I1028 12:19:39.571123  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.571133  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:39.571142  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:39.571204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:39.605684  186170 cri.go:89] found id: ""
	I1028 12:19:39.605719  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.605731  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:39.605739  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:39.605804  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:39.639083  186170 cri.go:89] found id: ""
	I1028 12:19:39.639115  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.639125  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:39.639133  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:39.639195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:39.676273  186170 cri.go:89] found id: ""
	I1028 12:19:39.676310  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.676321  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:39.676332  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:39.676349  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:39.733153  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:39.733190  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:39.748475  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:39.748513  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:39.823884  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:39.823906  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:39.823920  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:39.903711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:39.903763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.447237  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:42.460741  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:42.460822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:42.500518  186170 cri.go:89] found id: ""
	I1028 12:19:42.500553  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.500565  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:42.500574  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:42.500636  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:42.542836  186170 cri.go:89] found id: ""
	I1028 12:19:42.542867  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.542875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:42.542882  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:42.542943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:42.581271  186170 cri.go:89] found id: ""
	I1028 12:19:42.581303  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.581322  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:42.581331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:42.581382  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:42.616772  186170 cri.go:89] found id: ""
	I1028 12:19:42.616796  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.616803  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:42.616809  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:42.616858  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:42.650467  186170 cri.go:89] found id: ""
	I1028 12:19:42.650504  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.650515  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:42.650524  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:42.650590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:42.688677  186170 cri.go:89] found id: ""
	I1028 12:19:42.688713  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.688726  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:42.688734  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:42.688796  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:42.727141  186170 cri.go:89] found id: ""
	I1028 12:19:42.727167  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.727174  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:42.727181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:42.727231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:42.767373  186170 cri.go:89] found id: ""
	I1028 12:19:42.767404  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.767415  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:42.767425  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:42.767438  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:42.818474  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:42.818511  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:42.832181  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:42.832210  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:42.905428  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:42.905450  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:42.905465  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:42.985614  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:42.985653  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:45.527361  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:45.541487  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:45.541574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:45.579562  186170 cri.go:89] found id: ""
	I1028 12:19:45.579591  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.579600  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:45.579606  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:45.579666  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:45.614461  186170 cri.go:89] found id: ""
	I1028 12:19:45.614494  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.614504  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:45.614512  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:45.614575  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:45.651495  186170 cri.go:89] found id: ""
	I1028 12:19:45.651538  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.651550  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:45.651558  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:45.651619  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:45.691664  186170 cri.go:89] found id: ""
	I1028 12:19:45.691699  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.691710  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:45.691718  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:45.691785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:45.730284  186170 cri.go:89] found id: ""
	I1028 12:19:45.730325  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.730341  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:45.730348  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:45.730410  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:45.766524  186170 cri.go:89] found id: ""
	I1028 12:19:45.766554  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.766565  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:45.766573  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:45.766630  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:45.803353  186170 cri.go:89] found id: ""
	I1028 12:19:45.803381  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.803393  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:45.803400  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:45.803468  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:45.842928  186170 cri.go:89] found id: ""
	I1028 12:19:45.842953  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.842960  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:45.842968  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:45.842979  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:45.921782  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:45.921809  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:45.921826  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:45.997269  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:45.997321  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:46.036008  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:46.036042  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:46.090242  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:46.090282  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:48.607052  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:48.620745  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:48.620816  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:48.657550  186170 cri.go:89] found id: ""
	I1028 12:19:48.657582  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.657592  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:48.657601  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:48.657676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:48.695514  186170 cri.go:89] found id: ""
	I1028 12:19:48.695542  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.695549  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:48.695555  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:48.695603  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:48.733589  186170 cri.go:89] found id: ""
	I1028 12:19:48.733616  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.733624  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:48.733631  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:48.733680  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:48.768340  186170 cri.go:89] found id: ""
	I1028 12:19:48.768370  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.768378  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:48.768384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:48.768435  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:48.818057  186170 cri.go:89] found id: ""
	I1028 12:19:48.818086  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.818096  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:48.818105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:48.818169  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:48.854663  186170 cri.go:89] found id: ""
	I1028 12:19:48.854695  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.854705  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:48.854715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:48.854785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:48.888919  186170 cri.go:89] found id: ""
	I1028 12:19:48.888949  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.888960  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:48.888969  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:48.889030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:48.923871  186170 cri.go:89] found id: ""
	I1028 12:19:48.923900  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.923908  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:48.923917  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:48.923928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:48.977985  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:48.978025  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:48.992861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:48.992893  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:49.071925  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:49.071952  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:49.071969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:49.149743  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:49.149784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.693881  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:51.708017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:51.708079  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:51.748837  186170 cri.go:89] found id: ""
	I1028 12:19:51.748872  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.748883  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:51.748892  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:51.748957  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:51.793684  186170 cri.go:89] found id: ""
	I1028 12:19:51.793716  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.793733  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:51.793741  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:51.793803  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:51.832104  186170 cri.go:89] found id: ""
	I1028 12:19:51.832140  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.832151  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:51.832159  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:51.832225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:51.866214  186170 cri.go:89] found id: ""
	I1028 12:19:51.866250  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.866264  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:51.866270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:51.866345  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:51.909073  186170 cri.go:89] found id: ""
	I1028 12:19:51.909100  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.909107  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:51.909113  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:51.909160  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:51.949202  186170 cri.go:89] found id: ""
	I1028 12:19:51.949231  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.949239  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:51.949245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:51.949306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:51.990977  186170 cri.go:89] found id: ""
	I1028 12:19:51.991004  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.991011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:51.991018  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:51.991069  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:52.027180  186170 cri.go:89] found id: ""
	I1028 12:19:52.027215  186170 logs.go:282] 0 containers: []
	W1028 12:19:52.027226  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:52.027237  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:52.027259  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:52.080482  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:52.080536  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:52.097572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:52.097612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:52.173055  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:52.173095  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:52.173113  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:52.249950  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:52.249995  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:54.794765  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:54.809435  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:54.809548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:54.846763  186170 cri.go:89] found id: ""
	I1028 12:19:54.846793  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.846805  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:54.846815  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:54.846876  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:54.885359  186170 cri.go:89] found id: ""
	I1028 12:19:54.885396  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.885409  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:54.885417  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:54.885481  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:54.922612  186170 cri.go:89] found id: ""
	I1028 12:19:54.922639  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.922650  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:54.922659  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:54.922722  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:54.958406  186170 cri.go:89] found id: ""
	I1028 12:19:54.958439  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.958450  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:54.958459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:54.958525  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:54.995319  186170 cri.go:89] found id: ""
	I1028 12:19:54.995350  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.995361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:54.995370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:54.995440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:55.032511  186170 cri.go:89] found id: ""
	I1028 12:19:55.032543  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.032551  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:55.032559  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:55.032624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:55.073196  186170 cri.go:89] found id: ""
	I1028 12:19:55.073226  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.073238  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:55.073245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:55.073310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:55.113726  186170 cri.go:89] found id: ""
	I1028 12:19:55.113754  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.113762  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:55.113771  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:55.113787  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:55.164402  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:55.164442  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:55.180729  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:55.180763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:55.254437  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:55.254466  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:55.254483  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:55.341392  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:55.341441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:57.883896  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:57.897429  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:57.897539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:57.933084  186170 cri.go:89] found id: ""
	I1028 12:19:57.933109  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.933118  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:57.933127  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:57.933198  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:57.971244  186170 cri.go:89] found id: ""
	I1028 12:19:57.971276  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.971289  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:57.971298  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:57.971361  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:58.007916  186170 cri.go:89] found id: ""
	I1028 12:19:58.007952  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.007963  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:58.007972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:58.008050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:58.043042  186170 cri.go:89] found id: ""
	I1028 12:19:58.043084  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.043094  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:58.043103  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:58.043172  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:58.080277  186170 cri.go:89] found id: ""
	I1028 12:19:58.080314  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.080324  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:58.080332  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:58.080395  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:58.117254  186170 cri.go:89] found id: ""
	I1028 12:19:58.117292  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.117301  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:58.117308  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:58.117356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:58.152830  186170 cri.go:89] found id: ""
	I1028 12:19:58.152862  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.152873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:58.152881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:58.152946  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:58.190229  186170 cri.go:89] found id: ""
	I1028 12:19:58.190259  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.190270  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:58.190281  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:58.190296  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:58.231792  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:58.231823  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:58.291189  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:58.291233  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:58.307804  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:58.307837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:58.384490  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:58.384515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:58.384530  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:00.963569  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:00.977292  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:20:00.977363  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:20:01.017161  186170 cri.go:89] found id: ""
	I1028 12:20:01.017190  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.017198  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:20:01.017204  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:20:01.017254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:20:01.054651  186170 cri.go:89] found id: ""
	I1028 12:20:01.054687  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.054698  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:20:01.054705  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:20:01.054768  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:20:01.092934  186170 cri.go:89] found id: ""
	I1028 12:20:01.092968  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.092979  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:20:01.092988  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:20:01.093048  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:20:01.134463  186170 cri.go:89] found id: ""
	I1028 12:20:01.134499  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.134510  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:20:01.134519  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:20:01.134580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:20:01.171922  186170 cri.go:89] found id: ""
	I1028 12:20:01.171960  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.171970  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:20:01.171978  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:20:01.172050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:20:01.208664  186170 cri.go:89] found id: ""
	I1028 12:20:01.208694  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.208703  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:20:01.208715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:20:01.208781  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:20:01.248207  186170 cri.go:89] found id: ""
	I1028 12:20:01.248242  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.248251  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:20:01.248258  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:20:01.248318  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:20:01.289182  186170 cri.go:89] found id: ""
	I1028 12:20:01.289212  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.289222  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:20:01.289233  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:20:01.289277  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:20:01.334646  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:20:01.334679  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:20:01.396212  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:20:01.396255  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:20:01.411774  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:20:01.411801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:20:01.497745  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:20:01.497772  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:20:01.497784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:04.092363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:04.106585  186170 kubeadm.go:597] duration metric: took 4m1.83229859s to restartPrimaryControlPlane
	W1028 12:20:04.106657  186170 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:04.106678  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:07.549703  186170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.442997936s)
	I1028 12:20:07.549781  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:07.565304  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:07.577919  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:07.590433  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:07.590461  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:07.590514  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:07.600793  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:07.600858  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:07.611331  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:07.621191  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:07.621256  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:07.631722  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.642180  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:07.642255  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.654425  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:07.664696  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:07.664755  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:07.675272  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:07.902931  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:22:04.038479  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:22:04.038595  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:22:04.040170  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.040244  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.040356  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.040466  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.040579  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:04.040700  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:04.042557  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:04.042662  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:04.042757  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:04.042877  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:04.042984  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:04.043096  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:04.043158  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:04.043247  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:04.043341  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:04.043442  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:04.043558  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:04.043622  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:04.043675  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:04.043718  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:04.043768  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:04.043825  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:04.043871  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:04.044021  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:04.044164  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:04.044224  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:04.044332  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:04.046085  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:04.046237  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:04.046370  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:04.046463  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:04.046544  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:04.046679  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:04.046728  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:04.046786  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.046976  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047099  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047318  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047393  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047554  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047611  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047799  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047892  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.048151  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.048167  186170 kubeadm.go:310] 
	I1028 12:22:04.048208  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:22:04.048252  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:22:04.048262  186170 kubeadm.go:310] 
	I1028 12:22:04.048317  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:22:04.048363  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:22:04.048453  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:22:04.048464  186170 kubeadm.go:310] 
	I1028 12:22:04.048557  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:22:04.048604  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:22:04.048658  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:22:04.048672  186170 kubeadm.go:310] 
	I1028 12:22:04.048789  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:22:04.048872  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:22:04.048879  186170 kubeadm.go:310] 
	I1028 12:22:04.049027  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:22:04.049143  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:22:04.049246  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:22:04.049347  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:22:04.049428  186170 kubeadm.go:310] 
	W1028 12:22:04.049541  186170 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 12:22:04.049593  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:22:04.555608  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:22:04.571673  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:22:04.583645  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:22:04.583667  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:22:04.583708  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:22:04.594436  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:22:04.594497  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:22:04.605784  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:22:04.616699  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:22:04.616781  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:22:04.628581  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.639511  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:22:04.639608  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.650503  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:22:04.662383  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:22:04.662445  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:22:04.673286  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:22:04.755504  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.755597  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.903636  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.903808  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.903902  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:05.095520  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:05.097710  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:05.097850  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:05.097937  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:05.098061  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:05.098152  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:05.098252  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:05.098346  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:05.098440  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:05.098905  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:05.099253  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:05.099726  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:05.099786  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:05.099872  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:05.357781  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:05.538771  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:05.744145  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:06.074866  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:06.090636  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:06.091772  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:06.091863  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:06.255534  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:06.257598  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:06.257740  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:06.264309  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:06.266553  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:06.266699  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:06.268340  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:46.271413  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:46.271550  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:46.271812  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:51.271863  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:51.272118  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:01.272732  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:01.272940  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:21.273621  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:21.273888  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.272718  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:24:01.273041  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.273073  186170 kubeadm.go:310] 
	I1028 12:24:01.273126  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:24:01.273220  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:24:01.273249  186170 kubeadm.go:310] 
	I1028 12:24:01.273303  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:24:01.273375  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:24:01.273508  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:24:01.273520  186170 kubeadm.go:310] 
	I1028 12:24:01.273665  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:24:01.273717  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:24:01.273760  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:24:01.273770  186170 kubeadm.go:310] 
	I1028 12:24:01.273900  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:24:01.273966  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:24:01.273972  186170 kubeadm.go:310] 
	I1028 12:24:01.274090  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:24:01.274165  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:24:01.274233  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:24:01.274294  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:24:01.274302  186170 kubeadm.go:310] 
	I1028 12:24:01.275128  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:24:01.275221  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:24:01.275324  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:24:01.275400  186170 kubeadm.go:394] duration metric: took 7m59.062813621s to StartCluster
	I1028 12:24:01.275480  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:24:01.275551  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:24:01.326735  186170 cri.go:89] found id: ""
	I1028 12:24:01.326760  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.326767  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:24:01.326774  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:24:01.326822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:24:01.368065  186170 cri.go:89] found id: ""
	I1028 12:24:01.368094  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.368103  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:24:01.368109  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:24:01.368162  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:24:01.410391  186170 cri.go:89] found id: ""
	I1028 12:24:01.410425  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.410437  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:24:01.410446  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:24:01.410515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:24:01.453290  186170 cri.go:89] found id: ""
	I1028 12:24:01.453332  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.453343  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:24:01.453361  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:24:01.453422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:24:01.490513  186170 cri.go:89] found id: ""
	I1028 12:24:01.490540  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.490547  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:24:01.490553  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:24:01.490600  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:24:01.528320  186170 cri.go:89] found id: ""
	I1028 12:24:01.528350  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.528361  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:24:01.528369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:24:01.528430  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:24:01.566998  186170 cri.go:89] found id: ""
	I1028 12:24:01.567030  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.567041  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:24:01.567050  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:24:01.567113  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:24:01.600946  186170 cri.go:89] found id: ""
	I1028 12:24:01.600973  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.600983  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:24:01.600997  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:24:01.601018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:24:01.615132  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:24:01.615161  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:24:01.737336  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:24:01.737371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:24:01.737387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:24:01.862216  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:24:01.862257  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:24:01.906635  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:24:01.906666  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:24:01.959555  186170 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:24:01.959629  186170 out.go:270] * 
	W1028 12:24:01.959691  186170 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.959706  186170 out.go:270] * 
	W1028 12:24:01.960513  186170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:24:01.963818  186170 out.go:201] 
	W1028 12:24:01.965768  186170 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.965852  186170 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:24:01.965874  186170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:24:01.967350  186170 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-089993 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 2 (246.569499ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-089993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-089993 logs -n 25: (1.581299194s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-601400                              | cert-expiration-601400       | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-871884             | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-219559 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | disable-driver-mounts-219559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:10 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709250            | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC | 28 Oct 24 12:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089993        | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-871884                  | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-349222  | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709250                 | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089993             | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-349222       | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:13 UTC | 28 Oct 24 12:21 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:13:02
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:13:02.452508  186547 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:13:02.452621  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452630  186547 out.go:358] Setting ErrFile to fd 2...
	I1028 12:13:02.452635  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452828  186547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:13:02.453378  186547 out.go:352] Setting JSON to false
	I1028 12:13:02.454320  186547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6925,"bootTime":1730110657,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:13:02.454420  186547 start.go:139] virtualization: kvm guest
	I1028 12:13:02.456754  186547 out.go:177] * [default-k8s-diff-port-349222] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:13:02.458343  186547 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:13:02.458413  186547 notify.go:220] Checking for updates...
	I1028 12:13:02.460946  186547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:13:02.462089  186547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:13:02.463460  186547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:13:02.464649  186547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:13:02.466107  186547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:13:02.468142  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:13:02.468518  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.468587  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.483793  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1028 12:13:02.484273  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.484861  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.484884  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.485260  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.485471  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.485712  186547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:13:02.485997  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.486030  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.501110  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I1028 12:13:02.501722  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.502335  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.502362  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.502682  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.502878  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.539766  186547 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:13:02.541024  186547 start.go:297] selected driver: kvm2
	I1028 12:13:02.541038  186547 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.541168  186547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:13:02.541929  186547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.542014  186547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:13:02.557443  186547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:13:02.557868  186547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:13:02.557902  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:13:02.557947  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:13:02.557987  186547 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.558098  186547 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.560651  186547 out.go:177] * Starting "default-k8s-diff-port-349222" primary control-plane node in "default-k8s-diff-port-349222" cluster
	I1028 12:13:02.693744  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:02.561767  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:13:02.561800  186547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:13:02.561810  186547 cache.go:56] Caching tarball of preloaded images
	I1028 12:13:02.561877  186547 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:13:02.561887  186547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:13:02.561973  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:13:02.562165  186547 start.go:360] acquireMachinesLock for default-k8s-diff-port-349222: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:13:08.773770  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:11.845825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:17.925957  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:20.997733  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:27.077858  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:30.149737  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:36.229851  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:39.301764  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:45.381781  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:48.453767  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:54.533793  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:57.605754  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:03.685848  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:06.757874  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:12.837829  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:15.909778  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:21.989850  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:25.061812  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:31.141825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:34.213757  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:40.293844  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:43.365865  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:49.445872  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:52.517750  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:58.597834  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:01.669837  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:07.749853  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:10.821838  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:13.826298  185942 start.go:364] duration metric: took 3m37.788021766s to acquireMachinesLock for "embed-certs-709250"
	I1028 12:15:13.826369  185942 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:13.826382  185942 fix.go:54] fixHost starting: 
	I1028 12:15:13.827047  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:13.827113  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:13.842889  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I1028 12:15:13.843403  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:13.843915  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:15:13.843938  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:13.844374  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:13.844568  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:13.844733  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:15:13.846440  185942 fix.go:112] recreateIfNeeded on embed-certs-709250: state=Stopped err=<nil>
	I1028 12:15:13.846464  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	W1028 12:15:13.846629  185942 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:13.848878  185942 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709250" ...
	I1028 12:15:13.850607  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Start
	I1028 12:15:13.850800  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring networks are active...
	I1028 12:15:13.851930  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network default is active
	I1028 12:15:13.852331  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network mk-embed-certs-709250 is active
	I1028 12:15:13.852652  185942 main.go:141] libmachine: (embed-certs-709250) Getting domain xml...
	I1028 12:15:13.853394  185942 main.go:141] libmachine: (embed-certs-709250) Creating domain...
	I1028 12:15:15.098667  185942 main.go:141] libmachine: (embed-certs-709250) Waiting to get IP...
	I1028 12:15:15.099525  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.099919  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.099951  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.099877  187018 retry.go:31] will retry after 285.25732ms: waiting for machine to come up
	I1028 12:15:15.386531  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.386992  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.387023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.386921  187018 retry.go:31] will retry after 327.08041ms: waiting for machine to come up
	I1028 12:15:15.715435  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.715900  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.715928  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.715846  187018 retry.go:31] will retry after 443.083162ms: waiting for machine to come up
	I1028 12:15:13.823652  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:13.823724  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824056  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:15:13.824085  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824284  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:15:13.826158  185546 machine.go:96] duration metric: took 4m37.413470632s to provisionDockerMachine
	I1028 12:15:13.826202  185546 fix.go:56] duration metric: took 4m37.436313043s for fixHost
	I1028 12:15:13.826208  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 4m37.436350273s
	W1028 12:15:13.826226  185546 start.go:714] error starting host: provision: host is not running
	W1028 12:15:13.826336  185546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 12:15:13.826346  185546 start.go:729] Will try again in 5 seconds ...
	I1028 12:15:16.160595  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.161024  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.161045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.161003  187018 retry.go:31] will retry after 599.535995ms: waiting for machine to come up
	I1028 12:15:16.761771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.762167  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.762213  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.762114  187018 retry.go:31] will retry after 527.275961ms: waiting for machine to come up
	I1028 12:15:17.290788  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:17.291124  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:17.291145  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:17.291098  187018 retry.go:31] will retry after 858.175967ms: waiting for machine to come up
	I1028 12:15:18.150644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.151045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.151071  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.150996  187018 retry.go:31] will retry after 727.962346ms: waiting for machine to come up
	I1028 12:15:18.880545  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.880990  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.881020  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.880942  187018 retry.go:31] will retry after 1.184956373s: waiting for machine to come up
	I1028 12:15:20.067178  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:20.067603  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:20.067635  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:20.067553  187018 retry.go:31] will retry after 1.635056202s: waiting for machine to come up
	I1028 12:15:18.827987  185546 start.go:360] acquireMachinesLock for no-preload-871884: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:15:21.703941  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:21.704338  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:21.704365  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:21.704302  187018 retry.go:31] will retry after 1.865473383s: waiting for machine to come up
	I1028 12:15:23.572362  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:23.572816  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:23.572843  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:23.572773  187018 retry.go:31] will retry after 2.604970031s: waiting for machine to come up
	I1028 12:15:26.181289  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:26.181849  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:26.181880  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:26.181788  187018 retry.go:31] will retry after 2.866004055s: waiting for machine to come up
	I1028 12:15:29.049604  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:29.050029  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:29.050068  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:29.049970  187018 retry.go:31] will retry after 3.046879869s: waiting for machine to come up
	I1028 12:15:33.350844  186170 start.go:364] duration metric: took 3m34.924904114s to acquireMachinesLock for "old-k8s-version-089993"
	I1028 12:15:33.350912  186170 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:33.350923  186170 fix.go:54] fixHost starting: 
	I1028 12:15:33.351392  186170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:33.351440  186170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:33.368339  186170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1028 12:15:33.368805  186170 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:33.369418  186170 main.go:141] libmachine: Using API Version  1
	I1028 12:15:33.369439  186170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:33.369784  186170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:33.369969  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:33.370125  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetState
	I1028 12:15:33.371873  186170 fix.go:112] recreateIfNeeded on old-k8s-version-089993: state=Stopped err=<nil>
	I1028 12:15:33.371908  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	W1028 12:15:33.372086  186170 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:33.374289  186170 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-089993" ...
	I1028 12:15:32.100252  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.100812  185942 main.go:141] libmachine: (embed-certs-709250) Found IP for machine: 192.168.39.211
	I1028 12:15:32.100831  185942 main.go:141] libmachine: (embed-certs-709250) Reserving static IP address...
	I1028 12:15:32.100842  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has current primary IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.101552  185942 main.go:141] libmachine: (embed-certs-709250) Reserved static IP address: 192.168.39.211
	I1028 12:15:32.101568  185942 main.go:141] libmachine: (embed-certs-709250) Waiting for SSH to be available...
	I1028 12:15:32.101602  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.101629  185942 main.go:141] libmachine: (embed-certs-709250) DBG | skip adding static IP to network mk-embed-certs-709250 - found existing host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"}
	I1028 12:15:32.101644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Getting to WaitForSSH function...
	I1028 12:15:32.104041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.104356  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104459  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH client type: external
	I1028 12:15:32.104488  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa (-rw-------)
	I1028 12:15:32.104519  185942 main.go:141] libmachine: (embed-certs-709250) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:32.104530  185942 main.go:141] libmachine: (embed-certs-709250) DBG | About to run SSH command:
	I1028 12:15:32.104538  185942 main.go:141] libmachine: (embed-certs-709250) DBG | exit 0
	I1028 12:15:32.233966  185942 main.go:141] libmachine: (embed-certs-709250) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:32.234363  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetConfigRaw
	I1028 12:15:32.235010  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.237443  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.237755  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.237783  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.238040  185942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/config.json ...
	I1028 12:15:32.238272  185942 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:32.238291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:32.238541  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.240765  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241165  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.241212  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241333  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.241520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241704  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241836  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.241989  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.242234  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.242247  185942 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:32.358412  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:32.358443  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.358773  185942 buildroot.go:166] provisioning hostname "embed-certs-709250"
	I1028 12:15:32.358810  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.359027  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.361776  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362122  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.362161  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362262  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.362429  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362579  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362709  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.362867  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.363084  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.363098  185942 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709250 && echo "embed-certs-709250" | sudo tee /etc/hostname
	I1028 12:15:32.492437  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709250
	
	I1028 12:15:32.492466  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.495108  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495394  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.495438  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495586  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.495771  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.495927  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.496054  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.496215  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.496399  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.496416  185942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709250/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:32.619038  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:32.619074  185942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:32.619113  185942 buildroot.go:174] setting up certificates
	I1028 12:15:32.619125  185942 provision.go:84] configureAuth start
	I1028 12:15:32.619137  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.619451  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.622055  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622448  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.622479  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622593  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.624610  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625037  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.625066  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625086  185942 provision.go:143] copyHostCerts
	I1028 12:15:32.625174  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:32.625190  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:32.625259  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:32.625396  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:32.625407  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:32.625444  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:32.625519  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:32.625541  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:32.625575  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:32.625645  185942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709250 san=[127.0.0.1 192.168.39.211 embed-certs-709250 localhost minikube]
	I1028 12:15:32.684483  185942 provision.go:177] copyRemoteCerts
	I1028 12:15:32.684547  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:32.684576  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.686926  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687244  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.687284  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687427  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.687617  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.687744  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.687890  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:32.776282  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:32.802180  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 12:15:32.829609  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:32.854274  185942 provision.go:87] duration metric: took 235.133526ms to configureAuth
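	The configureAuth step above generates a server certificate whose SAN list covers [127.0.0.1 192.168.39.211 embed-certs-709250 localhost minikube]. A minimal Go sketch of issuing a certificate with that kind of SAN list follows; it is illustrative only (it self-signs for brevity, whereas the provisioner signs with the minikube CA key, and the names and values are copied from the log or invented for the example):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative only: self-signed server cert with IP and DNS SANs,
		// mirroring the san=[...] list logged by provision.go above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-709250"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
			DNSNames:     []string{"embed-certs-709250", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}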
	I1028 12:15:32.854305  185942 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:32.854474  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:15:32.854547  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.857363  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.857736  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.857771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.858038  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.858251  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858442  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858652  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.858809  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.858979  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.858996  185942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:33.101841  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:33.101870  185942 machine.go:96] duration metric: took 863.584969ms to provisionDockerMachine
	I1028 12:15:33.101883  185942 start.go:293] postStartSetup for "embed-certs-709250" (driver="kvm2")
	I1028 12:15:33.101896  185942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:33.101911  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.102249  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:33.102285  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.105023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.105357  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105493  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.105710  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.105881  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.106032  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.193225  185942 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:33.197548  185942 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:33.197570  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:33.197637  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:33.197739  185942 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:33.197861  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:33.207962  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:33.231808  185942 start.go:296] duration metric: took 129.908529ms for postStartSetup
	I1028 12:15:33.231853  185942 fix.go:56] duration metric: took 19.405472942s for fixHost
	I1028 12:15:33.231875  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.234609  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.234943  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.234979  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.235167  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.235370  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235642  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.235806  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:33.236026  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:33.236041  185942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:33.350639  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117733.322211717
	
	I1028 12:15:33.350663  185942 fix.go:216] guest clock: 1730117733.322211717
	I1028 12:15:33.350673  185942 fix.go:229] Guest: 2024-10-28 12:15:33.322211717 +0000 UTC Remote: 2024-10-28 12:15:33.231858201 +0000 UTC m=+237.345598419 (delta=90.353516ms)
	I1028 12:15:33.350707  185942 fix.go:200] guest clock delta is within tolerance: 90.353516ms
	I1028 12:15:33.350714  185942 start.go:83] releasing machines lock for "embed-certs-709250", held for 19.524379046s
	I1028 12:15:33.350737  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.350974  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:33.353647  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354012  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.354041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354244  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354690  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354873  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354973  185942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:33.355017  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.355090  185942 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:33.355116  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.357679  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358050  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358074  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358242  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358389  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.358542  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.358584  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358616  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358681  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.358721  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358892  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.359048  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.359197  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.443468  185942 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:33.498501  185942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:33.642221  185942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:33.649269  185942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:33.649336  185942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:33.665990  185942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:33.666023  185942 start.go:495] detecting cgroup driver to use...
	I1028 12:15:33.666103  185942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:33.683188  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:33.699441  185942 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:33.699506  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:33.714192  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:33.728325  185942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:33.850801  185942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:34.028929  185942 docker.go:233] disabling docker service ...
	I1028 12:15:34.028991  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:34.045600  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:34.059450  185942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:34.182310  185942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:34.305346  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:34.322354  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:34.342738  185942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:15:34.342804  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.354622  185942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:34.354687  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.365663  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.376503  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.388360  185942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:34.399960  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.419087  185942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.439700  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.451425  185942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:34.461657  185942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:34.461710  185942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:34.476292  185942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:34.487186  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:34.614984  185942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:15:34.709983  185942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:34.710061  185942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:34.715204  185942 start.go:563] Will wait 60s for crictl version
	I1028 12:15:34.715268  185942 ssh_runner.go:195] Run: which crictl
	I1028 12:15:34.719459  185942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:34.760330  185942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:34.760407  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.788635  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.820113  185942 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:15:34.821282  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:34.824384  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.824719  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:34.824746  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.825032  185942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:34.829502  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:34.842695  185942 kubeadm.go:883] updating cluster {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:34.842845  185942 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:15:34.842897  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:34.881154  185942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:15:34.881218  185942 ssh_runner.go:195] Run: which lz4
	I1028 12:15:34.885630  185942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:34.890045  185942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:34.890075  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:15:33.375597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .Start
	I1028 12:15:33.375787  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring networks are active...
	I1028 12:15:33.376736  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network default is active
	I1028 12:15:33.377208  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network mk-old-k8s-version-089993 is active
	I1028 12:15:33.377706  186170 main.go:141] libmachine: (old-k8s-version-089993) Getting domain xml...
	I1028 12:15:33.378449  186170 main.go:141] libmachine: (old-k8s-version-089993) Creating domain...
	I1028 12:15:34.645925  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting to get IP...
	I1028 12:15:34.646739  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.647234  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.647347  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.647218  187153 retry.go:31] will retry after 292.558863ms: waiting for machine to come up
	I1028 12:15:34.941609  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.942074  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.942102  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.942024  187153 retry.go:31] will retry after 331.872118ms: waiting for machine to come up
	I1028 12:15:35.275748  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.276283  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.276318  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.276244  187153 retry.go:31] will retry after 427.829102ms: waiting for machine to come up
	I1028 12:15:35.705935  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.706409  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.706438  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.706367  187153 retry.go:31] will retry after 371.58196ms: waiting for machine to come up
	I1028 12:15:36.079879  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.080445  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.080469  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.080392  187153 retry.go:31] will retry after 504.323728ms: waiting for machine to come up
	I1028 12:15:36.585967  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.586405  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.586436  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.586346  187153 retry.go:31] will retry after 676.776678ms: waiting for machine to come up
	I1028 12:15:37.265499  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:37.266087  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:37.266114  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:37.266037  187153 retry.go:31] will retry after 1.178891662s: waiting for machine to come up
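	The repeated "will retry after …: waiting for machine to come up" messages above come from a backoff-and-retry loop around the guest IP lookup. A compact Go sketch of that pattern follows; the function name waitForIP, the delays, and the sample lookup are illustrative, not minikube's actual retry API:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address or attempts run out,
	// sleeping a jittered, growing delay between tries (illustrative only).
	func waitForIP(lookup func() (string, error), maxAttempts int) (string, error) {
		delay := 300 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("attempt %d failed (%v); will retry after %s\n", attempt, err, wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay each round
		}
		return "", errors.New("machine did not come up in time")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.61.10", nil // sample value for the sketch
		}, 10)
		fmt.Println(ip, err)
	}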
	I1028 12:15:36.448704  185942 crio.go:462] duration metric: took 1.563096609s to copy over tarball
	I1028 12:15:36.448792  185942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:38.703177  185942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25435315s)
	I1028 12:15:38.703207  185942 crio.go:469] duration metric: took 2.254465841s to extract the tarball
	I1028 12:15:38.703217  185942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:38.741005  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:38.788350  185942 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:15:38.788376  185942 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:15:38.788383  185942 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1028 12:15:38.788491  185942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:15:38.788558  185942 ssh_runner.go:195] Run: crio config
	I1028 12:15:38.835642  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:38.835667  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:38.835678  185942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:15:38.835701  185942 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709250 NodeName:embed-certs-709250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:15:38.835822  185942 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709250"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:15:38.835879  185942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:15:38.846832  185942 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:15:38.846925  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:15:38.857103  185942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1028 12:15:38.874531  185942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:15:38.892213  185942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
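	The kubeadm config dumped above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch for sanity-checking such a file by listing each document's apiVersion and kind; the use of gopkg.in/yaml.v3 and the hard-coded path are assumptions for the example, not part of minikube:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path assumed from the log above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // all documents read
				}
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}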
	I1028 12:15:38.910949  185942 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1028 12:15:38.915391  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:38.928874  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:39.045969  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:15:39.063425  185942 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250 for IP: 192.168.39.211
	I1028 12:15:39.063449  185942 certs.go:194] generating shared ca certs ...
	I1028 12:15:39.063465  185942 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:15:39.063638  185942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:15:39.063693  185942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:15:39.063709  185942 certs.go:256] generating profile certs ...
	I1028 12:15:39.063810  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.key
	I1028 12:15:39.063893  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce
	I1028 12:15:39.063951  185942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key
	I1028 12:15:39.064107  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:15:39.064153  185942 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:15:39.064167  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:15:39.064202  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:15:39.064239  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:15:39.064272  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:15:39.064335  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:39.064972  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:15:39.103261  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:15:39.145102  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:15:39.175151  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:15:39.205220  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 12:15:39.236045  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:15:39.273622  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:15:39.299336  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:15:39.325277  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:15:39.349878  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:15:39.374466  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:15:39.398920  185942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:15:39.416280  185942 ssh_runner.go:195] Run: openssl version
	I1028 12:15:39.422478  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:15:39.434671  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439581  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439635  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.445736  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:15:39.457128  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:15:39.468602  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473229  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473306  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.479063  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:15:39.490370  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:15:39.501843  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506514  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506579  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.512633  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:15:39.524115  185942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:15:39.528804  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:15:39.534982  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:15:39.541214  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:15:39.547734  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:15:39.554143  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:15:39.560719  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
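	Each "openssl x509 -noout -in <cert> -checkend 86400" run above asserts that the certificate is still valid for at least 24 hours. An equivalent check in Go with crypto/x509; the certificate path is taken from the log, the rest is illustrative:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Same idea as: openssl x509 -noout -in <cert> -checkend 86400
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid until", cert.NotAfter)
	}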
	I1028 12:15:39.567076  185942 kubeadm.go:392] StartCluster: {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:15:39.567173  185942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:15:39.567226  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.611567  185942 cri.go:89] found id: ""
	I1028 12:15:39.611644  185942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:15:39.622561  185942 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:15:39.622583  185942 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:15:39.622637  185942 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:15:39.632757  185942 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:15:39.633873  185942 kubeconfig.go:125] found "embed-certs-709250" server: "https://192.168.39.211:8443"
	I1028 12:15:39.635943  185942 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:15:39.646060  185942 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I1028 12:15:39.646104  185942 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:15:39.646119  185942 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:15:39.646177  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.686806  185942 cri.go:89] found id: ""
	I1028 12:15:39.686891  185942 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:15:39.703935  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:15:39.714319  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:15:39.714341  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:15:39.714389  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:15:39.725383  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:15:39.725452  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:15:39.737075  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:15:39.748226  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:15:39.748311  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:15:39.760111  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.770287  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:15:39.770365  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.780776  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:15:39.790412  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:15:39.790475  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:15:39.800727  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:15:39.811331  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:39.926791  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:38.446927  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:38.447488  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:38.447518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:38.447431  187153 retry.go:31] will retry after 1.170920623s: waiting for machine to come up
	I1028 12:15:39.619731  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:39.620169  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:39.620198  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:39.620119  187153 retry.go:31] will retry after 1.49376255s: waiting for machine to come up
	I1028 12:15:41.115247  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:41.115785  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:41.115815  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:41.115737  187153 retry.go:31] will retry after 2.161966931s: waiting for machine to come up
	I1028 12:15:43.280454  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:43.280989  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:43.281026  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:43.280932  187153 retry.go:31] will retry after 2.179284899s: waiting for machine to come up
	I1028 12:15:41.043020  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.11617977s)
	I1028 12:15:41.043082  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.246311  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.309073  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.392313  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:15:41.392425  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:41.893601  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.393518  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.444753  185942 api_server.go:72] duration metric: took 1.052438751s to wait for apiserver process to appear ...
	I1028 12:15:42.444794  185942 api_server.go:88] waiting for apiserver healthz status ...
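	The healthz wait that follows polls https://192.168.39.211:8443/healthz until the apiserver stops answering 403/500 and returns 200. A stripped-down Go sketch of that kind of loop; it skips TLS verification and presents no client certificate, which is exactly why a real apiserver can answer 403 as in the output below, and the names are illustrative rather than minikube's actual api_server.go code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative only: skip TLS verification; a real client would
				// load the cluster CA and a client certificate instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			} else {
				fmt.Println("healthz request failed:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.211:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}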
	I1028 12:15:42.444821  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.214786  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.214821  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.214837  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.252422  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.252458  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.445825  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.451454  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.451549  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:45.945668  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.956623  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.956667  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.445240  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.450197  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:46.450223  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.945901  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.950302  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:15:46.956218  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:15:46.956245  185942 api_server.go:131] duration metric: took 4.511443878s to wait for apiserver health ...
	I1028 12:15:46.956254  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:46.956260  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:46.958294  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
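
The wait above repeatedly probes the apiserver's /healthz endpoint, logging 403 (anonymous user) and 500 (post-start hooks still failing) responses and retrying until it gets 200. A minimal sketch of that polling pattern in Go, assuming the address and timeout from the log and skipping TLS verification only to keep the example short:

// healthz_sketch.go: poll an apiserver /healthz endpoint until it returns 200,
// mirroring the checks logged above. URL and timeout are taken from the log as
// placeholders; this is not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok"
			}
			// 403 and 500 both mean "keep waiting", as in the log output above.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.211:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
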
	I1028 12:15:45.462983  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:45.463534  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:45.463560  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:45.463491  187153 retry.go:31] will retry after 2.2623086s: waiting for machine to come up
	I1028 12:15:47.728769  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:47.729277  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:47.729332  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:47.729241  187153 retry.go:31] will retry after 4.393695309s: waiting for machine to come up
	I1028 12:15:46.959738  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:15:46.970473  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:15:46.994129  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:15:47.003807  185942 system_pods.go:59] 8 kube-system pods found
	I1028 12:15:47.003843  185942 system_pods.go:61] "coredns-7c65d6cfc9-j66cd" [d53b2839-00f6-4ccc-833d-76424b3efdba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:15:47.003851  185942 system_pods.go:61] "etcd-embed-certs-709250" [24761127-dde4-4f5d-b7cf-a13e37366e0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:15:47.003858  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [17996153-32c3-41e0-be90-fc9e058e0080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:15:47.003864  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [4ce37c00-1015-45f8-b847-1ca92cdf3a31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:15:47.003871  185942 system_pods.go:61] "kube-proxy-dl7xq" [a06ed5ff-b1e9-42c7-ba26-f120bb03ccb6] Running
	I1028 12:15:47.003877  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [c76e654e-a7fc-4891-8e73-bd74f9178c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:15:47.003883  185942 system_pods.go:61] "metrics-server-6867b74b74-k69kz" [568d5308-3f66-459b-b5c8-594d9400b6c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:15:47.003886  185942 system_pods.go:61] "storage-provisioner" [6552cef1-21b6-4306-a3e2-ff16793257dc] Running
	I1028 12:15:47.003893  185942 system_pods.go:74] duration metric: took 9.734271ms to wait for pod list to return data ...
	I1028 12:15:47.003900  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:15:47.008428  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:15:47.008465  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:15:47.008479  185942 node_conditions.go:105] duration metric: took 4.573275ms to run NodePressure ...
	I1028 12:15:47.008504  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:47.285509  185942 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291045  185942 kubeadm.go:739] kubelet initialised
	I1028 12:15:47.291069  185942 kubeadm.go:740] duration metric: took 5.521713ms waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291078  185942 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:15:47.299072  185942 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:49.312365  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:50.804953  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:50.804976  185942 pod_ready.go:82] duration metric: took 3.505873868s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:50.804986  185942 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
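
The pod_ready.go lines above poll individual kube-system pods until their Ready condition is True. A rough client-go sketch of that check, assuming a kubeconfig at the default location; the pod name is copied from the log and the helper itself is illustrative, not minikube's own code.

// podready_sketch.go: poll a pod until its Ready condition is True, as the
// pod_ready.go lines above do. Kubeconfig path and pod name are assumptions
// for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		ok, err := podReady(ctx, cs, "kube-system", "etcd-embed-certs-709250")
		if ok {
			fmt.Println("pod is Ready")
			return
		}
		if ctx.Err() != nil {
			fmt.Println("timed out waiting for pod:", ctx.Err())
			return
		}
		fmt.Println("not ready yet:", err)
		time.Sleep(2 * time.Second)
	}
}
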
	I1028 12:15:52.126559  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126960  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has current primary IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126988  186170 main.go:141] libmachine: (old-k8s-version-089993) Found IP for machine: 192.168.61.119
	I1028 12:15:52.127021  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserving static IP address...
	I1028 12:15:52.127441  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.127474  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | skip adding static IP to network mk-old-k8s-version-089993 - found existing host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"}
	I1028 12:15:52.127486  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserved static IP address: 192.168.61.119
	I1028 12:15:52.127498  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting for SSH to be available...
	I1028 12:15:52.127551  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Getting to WaitForSSH function...
	I1028 12:15:52.129970  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130313  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.130349  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH client type: external
	I1028 12:15:52.130540  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa (-rw-------)
	I1028 12:15:52.130565  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:52.130578  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | About to run SSH command:
	I1028 12:15:52.130593  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | exit 0
	I1028 12:15:52.253686  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | SSH cmd err, output: <nil>: 
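
The driver above waits for SSH by shelling out to the system ssh client with host-key checking disabled and the machine's generated private key. A minimal sketch of that invocation via os/exec; the address and key path are copied from the log as placeholders, and this is not the libmachine code itself.

// ssh_exec_sketch.go: run a command on the guest the way the log above does,
// by shelling out to the system ssh binary with the same options.
package main

import (
	"fmt"
	"os/exec"
)

func runOverSSH(addr, keyPath, command string) (string, error) {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		command,
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.119",
		"/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
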
	I1028 12:15:52.254051  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:15:52.254719  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.257217  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257692  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.257719  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257996  186170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:15:52.258203  186170 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:52.258222  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:52.258456  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.260665  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.260972  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.261012  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.261139  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.261295  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261451  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261632  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.261786  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.261968  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.261979  186170 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:52.362092  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:52.362129  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362362  186170 buildroot.go:166] provisioning hostname "old-k8s-version-089993"
	I1028 12:15:52.362386  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362588  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.365124  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.365489  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365598  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.365768  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.365924  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.366060  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.366238  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.366424  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.366441  186170 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089993 && echo "old-k8s-version-089993" | sudo tee /etc/hostname
	I1028 12:15:52.485032  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089993
	
	I1028 12:15:52.485069  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.487733  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488095  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.488129  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488270  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.488458  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488724  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.488872  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.489063  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.489079  186170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089993/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:52.599940  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:52.599975  186170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:52.600009  186170 buildroot.go:174] setting up certificates
	I1028 12:15:52.600019  186170 provision.go:84] configureAuth start
	I1028 12:15:52.600028  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.600319  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.603047  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603357  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.603385  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603536  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.605827  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606164  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.606190  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606334  186170 provision.go:143] copyHostCerts
	I1028 12:15:52.606414  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:52.606429  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:52.606500  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:52.606650  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:52.606661  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:52.606693  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:52.606795  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:52.606805  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:52.606834  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:52.606904  186170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089993 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-089993]
	I1028 12:15:52.715475  186170 provision.go:177] copyRemoteCerts
	I1028 12:15:52.715531  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:52.715556  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.718456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718758  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.718801  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718993  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.719189  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.719339  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.719461  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:52.802994  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:52.832311  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:15:52.864304  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:52.892143  186170 provision.go:87] duration metric: took 292.108499ms to configureAuth
	I1028 12:15:52.892178  186170 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:52.892401  186170 config.go:182] Loaded profile config "old-k8s-version-089993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:15:52.892499  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.895607  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.895996  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.896031  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.896198  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.896442  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896615  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896796  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.897005  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.897225  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.897249  186170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:53.144636  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:53.144668  186170 machine.go:96] duration metric: took 886.451205ms to provisionDockerMachine
	I1028 12:15:53.144683  186170 start.go:293] postStartSetup for "old-k8s-version-089993" (driver="kvm2")
	I1028 12:15:53.144701  186170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:53.144739  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.145102  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:53.145135  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.147486  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147776  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.147805  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147926  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.148136  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.148297  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.148438  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.228968  186170 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:53.233756  186170 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:53.233783  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:53.233862  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:53.233981  186170 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:53.234114  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:53.244314  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:53.273027  186170 start.go:296] duration metric: took 128.321696ms for postStartSetup
	I1028 12:15:53.273067  186170 fix.go:56] duration metric: took 19.922145767s for fixHost
	I1028 12:15:53.273087  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.275762  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276036  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.276069  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276227  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.276431  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276610  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276759  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.276948  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:53.277130  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:53.277140  186170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:53.378422  186547 start.go:364] duration metric: took 2m50.816229865s to acquireMachinesLock for "default-k8s-diff-port-349222"
	I1028 12:15:53.378482  186547 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:53.378491  186547 fix.go:54] fixHost starting: 
	I1028 12:15:53.378917  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:53.378971  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:53.395967  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I1028 12:15:53.396434  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:53.396923  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:15:53.396950  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:53.397332  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:53.397552  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:15:53.397726  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:15:53.399287  186547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349222: state=Stopped err=<nil>
	I1028 12:15:53.399337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	W1028 12:15:53.399468  186547 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:53.401446  186547 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-349222" ...
	I1028 12:15:53.378277  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117753.349360033
	
	I1028 12:15:53.378307  186170 fix.go:216] guest clock: 1730117753.349360033
	I1028 12:15:53.378327  186170 fix.go:229] Guest: 2024-10-28 12:15:53.349360033 +0000 UTC Remote: 2024-10-28 12:15:53.273071551 +0000 UTC m=+234.997009775 (delta=76.288482ms)
	I1028 12:15:53.378346  186170 fix.go:200] guest clock delta is within tolerance: 76.288482ms
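
fix.go above reads `date +%s.%N` on the guest and accepts the skew against the host clock if it is within tolerance. A small sketch of that comparison; the one-second tolerance is an assumption for illustration, and the timestamp is the one from the log.

// clockdelta_sketch.go: parse a `date +%s.%N` reading from the guest and compare
// it to the local clock, as the fix.go lines above do.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func guestClockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(local), nil
}

func main() {
	delta, err := guestClockDelta("1730117753.349360033", time.Now())
	if err != nil {
		panic(err)
	}
	if math.Abs(delta.Seconds()) < 1.0 { // assumed tolerance
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is too large\n", delta)
	}
}
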
	I1028 12:15:53.378351  186170 start.go:83] releasing machines lock for "old-k8s-version-089993", held for 20.027466326s
	I1028 12:15:53.378379  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.378640  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:53.381602  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.381951  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.381980  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.382165  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382654  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382864  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382949  186170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:53.382997  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.383090  186170 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:53.383109  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.385829  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.385926  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386223  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386272  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386303  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386343  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386522  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386692  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.386704  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386849  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387012  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.387009  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.387217  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387355  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.462736  186170 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:53.490076  186170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:53.637493  186170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:53.643609  186170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:53.643668  186170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:53.660695  186170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:53.660725  186170 start.go:495] detecting cgroup driver to use...
	I1028 12:15:53.660797  186170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:53.677283  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:53.691838  186170 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:53.691914  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:53.706354  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:53.721257  186170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:53.843177  186170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:54.012260  186170 docker.go:233] disabling docker service ...
	I1028 12:15:54.012369  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:54.028355  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:54.042371  186170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:54.175559  186170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:54.308690  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:54.323918  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:54.343000  186170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:15:54.343072  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.354540  186170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:54.354620  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.366058  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.377720  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.388649  186170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:54.401499  186170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:54.414177  186170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:54.414250  186170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:54.429049  186170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:54.441955  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:54.588173  186170 ssh_runner.go:195] Run: sudo systemctl restart crio
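
The sequence above rewrites the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf with sed, reloads systemd, and restarts crio. A compact sketch performing the same edits from Go; the paths and values are those shown in the log, and it assumes it runs as root on the guest rather than over ssh_runner.

// crio_config_sketch.go: apply the CRI-O configuration edits the log shows
// (pause image, cgroup manager) and restart the service. Illustrative only,
// not minikube's code; assumes root on the guest.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, conf},
		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("crio reconfigured and restarted")
}
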
	I1028 12:15:54.686671  186170 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:54.686732  186170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:54.692246  186170 start.go:563] Will wait 60s for crictl version
	I1028 12:15:54.692303  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:15:54.697056  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:54.749343  186170 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:54.749410  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.783554  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.817295  186170 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:15:52.838774  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.811974  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:53.811997  185942 pod_ready.go:82] duration metric: took 3.00700476s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:53.812008  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:55.824400  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.402920  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Start
	I1028 12:15:53.403172  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring networks are active...
	I1028 12:15:53.403912  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network default is active
	I1028 12:15:53.404195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network mk-default-k8s-diff-port-349222 is active
	I1028 12:15:53.404615  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Getting domain xml...
	I1028 12:15:53.405554  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Creating domain...
	I1028 12:15:54.734540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting to get IP...
	I1028 12:15:54.735417  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735784  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735880  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:54.735759  187305 retry.go:31] will retry after 268.036011ms: waiting for machine to come up
	I1028 12:15:55.005376  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.005999  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.006032  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.005930  187305 retry.go:31] will retry after 255.477665ms: waiting for machine to come up
	I1028 12:15:55.263500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264118  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264153  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.264087  187305 retry.go:31] will retry after 354.942061ms: waiting for machine to come up
	I1028 12:15:55.620877  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621664  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621698  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.621610  187305 retry.go:31] will retry after 569.620755ms: waiting for machine to come up
	I1028 12:15:56.192393  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192872  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.192803  187305 retry.go:31] will retry after 703.637263ms: waiting for machine to come up
	I1028 12:15:56.897762  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898304  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.898214  187305 retry.go:31] will retry after 713.628482ms: waiting for machine to come up
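	(Note: the "will retry after …" lines above come from minikube's retry helper (retry.go) polling libvirt until the restarted VM reports an IP address. As an illustration only, not minikube's actual implementation, a retry loop with growing, jittered delays looks roughly like the following Go sketch; every name in it is an assumption made for the example.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling op until it succeeds or attempts run out,
	// sleeping a randomized, roughly doubling delay between tries.
	func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Jittered delay that grows each round, like the intervals in the log.
			delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
	}

	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			calls++
			if calls < 3 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 5, 200*time.Millisecond)
		fmt.Println("result:", err)
	}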
	I1028 12:15:54.818674  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:54.822118  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822477  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:54.822508  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822713  186170 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:54.827066  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:54.839718  186170 kubeadm.go:883] updating cluster {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:54.839871  186170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:15:54.839932  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:54.896582  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:54.896647  186170 ssh_runner.go:195] Run: which lz4
	I1028 12:15:54.901264  186170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:54.905758  186170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:54.905798  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:15:56.763719  186170 crio.go:462] duration metric: took 1.862485619s to copy over tarball
	I1028 12:15:56.763807  186170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:58.321600  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:00.018244  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.018285  185942 pod_ready.go:82] duration metric: took 6.206271068s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.018297  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028610  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.028638  185942 pod_ready.go:82] duration metric: took 10.334289ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028653  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041057  185942 pod_ready.go:93] pod "kube-proxy-dl7xq" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.041091  185942 pod_ready.go:82] duration metric: took 12.429027ms for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041106  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049617  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.049645  185942 pod_ready.go:82] duration metric: took 8.529436ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049659  185942 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:57.613338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613844  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613873  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:57.613796  187305 retry.go:31] will retry after 1.188479203s: waiting for machine to come up
	I1028 12:15:58.803300  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803690  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803724  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:58.803650  187305 retry.go:31] will retry after 1.439057212s: waiting for machine to come up
	I1028 12:16:00.244665  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245201  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245239  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:00.245141  187305 retry.go:31] will retry after 1.842038011s: waiting for machine to come up
	I1028 12:16:02.090283  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090879  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:02.090828  187305 retry.go:31] will retry after 1.556155538s: waiting for machine to come up
	I1028 12:15:59.824110  186170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060253776s)
	I1028 12:15:59.824148  186170 crio.go:469] duration metric: took 3.060398276s to extract the tarball
	I1028 12:15:59.824158  186170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:59.871783  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:59.913216  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:59.913249  186170 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:15:59.913338  186170 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.913374  186170 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.913404  186170 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.913415  186170 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.913435  186170 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.913459  186170 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.913378  186170 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:15:59.913372  186170 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:15:59.914923  186170 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.914935  186170 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.914944  186170 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.914924  186170 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:15:59.915002  186170 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.915023  186170 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.107392  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.125355  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.128498  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.134762  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.138350  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.141722  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:16:00.186291  186170 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:16:00.186340  186170 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.186404  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253168  186170 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:16:00.253211  186170 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.253256  186170 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:16:00.253279  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253288  186170 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.253328  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290772  186170 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:16:00.290817  186170 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.290857  186170 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:16:00.290890  186170 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:16:00.290869  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290913  186170 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:16:00.290946  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290970  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.290896  186170 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.291016  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.291049  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.291080  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.317629  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.377316  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.377376  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.377463  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.377515  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.488216  186170 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:16:00.488279  186170 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.488337  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.513051  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.556242  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.556277  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.556380  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.556435  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.556544  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.556560  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.634253  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.737688  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.737739  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.737799  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:16:00.737870  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:16:00.737897  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:16:00.738000  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.832218  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:16:00.832247  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:16:00.832284  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:16:00.844460  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.880788  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:16:01.121687  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:01.269970  186170 cache_images.go:92] duration metric: took 1.356701981s to LoadCachedImages
	W1028 12:16:01.270091  186170 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 12:16:01.270114  186170 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1028 12:16:01.270229  186170 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089993 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:01.270317  186170 ssh_runner.go:195] Run: crio config
	I1028 12:16:01.330579  186170 cni.go:84] Creating CNI manager for ""
	I1028 12:16:01.330604  186170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:01.330615  186170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:01.330634  186170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089993 NodeName:old-k8s-version-089993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:16:01.330861  186170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089993"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
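	(Note: the kubeadm, kubelet and kube-proxy configuration printed above is generated by minikube from the cluster parameters — node IP, pod and service subnets, Kubernetes version. As a hedged sketch of that idea only, not minikube's real generator, a fragment of such a config can be rendered from Go values with the standard text/template package; the struct, field and template names below are illustrative assumptions, the values are taken from the log.)

	package main

	import (
		"os"
		"text/template"
	)

	// clusterParams holds the handful of values the rendered config depends on.
	type clusterParams struct {
		AdvertiseAddress  string
		BindPort          int
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := clusterParams{
			AdvertiseAddress:  "192.168.61.119",
			BindPort:          8443,
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.20.0",
		}
		tmpl := template.Must(template.New("cluster").Parse(clusterTmpl))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}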
	I1028 12:16:01.330940  186170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:16:01.342449  186170 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:01.342542  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:01.354804  186170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:16:01.373823  186170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:01.393848  186170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:16:01.414537  186170 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:01.419057  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:01.434491  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:01.605220  186170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:01.629171  186170 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993 for IP: 192.168.61.119
	I1028 12:16:01.629198  186170 certs.go:194] generating shared ca certs ...
	I1028 12:16:01.629223  186170 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:01.629411  186170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:01.629473  186170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:01.629486  186170 certs.go:256] generating profile certs ...
	I1028 12:16:01.629625  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key
	I1028 12:16:01.629692  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee
	I1028 12:16:01.629740  186170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key
	I1028 12:16:01.629886  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:01.629929  186170 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:01.629943  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:01.629984  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:01.630025  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:01.630060  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:01.630113  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:01.630911  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:01.673352  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:01.705371  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:01.731174  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:01.775555  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:16:01.809878  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:01.842241  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:01.876753  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:16:01.914897  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:01.945991  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:01.977763  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:02.010010  186170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:02.034184  186170 ssh_runner.go:195] Run: openssl version
	I1028 12:16:02.042784  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:02.055148  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060669  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060751  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.067345  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:02.079427  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:02.091613  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.096996  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.097061  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.103561  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:02.115762  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:02.128405  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133889  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133961  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.140274  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:02.155800  186170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:02.162343  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:02.170755  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:02.179332  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:02.187694  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:02.196183  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:02.204538  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
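	(Note: the series of `openssl x509 -noout -in <cert> -checkend 86400` commands above verifies that each cluster certificate will still be valid for at least the next 24 hours before the control plane is restarted. The same check can be written directly in Go with crypto/x509; this is only an illustrative sketch — the file path is taken from the log, the helper name is an assumption.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within the
	// given window, mirroring what `openssl x509 -checkend` tests.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// 86400 seconds == 24h, matching the -checkend argument in the log.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}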
	I1028 12:16:02.212604  186170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:02.212715  186170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:02.212796  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.260250  186170 cri.go:89] found id: ""
	I1028 12:16:02.260350  186170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:02.274246  186170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:02.274269  186170 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:02.274335  186170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:02.287972  186170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:02.288983  186170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:16:02.289661  186170 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-132631/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089993" cluster setting kubeconfig missing "old-k8s-version-089993" context setting]
	I1028 12:16:02.290778  186170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:02.292747  186170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:02.306303  186170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1028 12:16:02.306357  186170 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:02.306375  186170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:02.306438  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.348962  186170 cri.go:89] found id: ""
	I1028 12:16:02.349041  186170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:02.366483  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:02.377667  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:02.377690  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:02.377758  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:02.387857  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:02.387915  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:02.398137  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:02.408922  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:02.408992  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:02.419044  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.428952  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:02.429020  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.439488  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:02.450112  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:02.450174  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
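	(Note: the grep/rm sequence above is the stale-config cleanup: each /etc/kubernetes/*.conf is kept only if it already references https://control-plane.minikube.internal:8443, and is removed otherwise so kubeadm can regenerate it in the following init phases. A minimal Go sketch of that pattern, illustrative only and not minikube's code; paths and the endpoint value mirror the log.)

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// removeIfStale keeps the file only if it already points at the expected
	// control-plane endpoint; otherwise it removes the file so it gets regenerated.
	func removeIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if os.IsNotExist(err) {
			return nil // nothing to clean up
		}
		if err != nil {
			return err
		}
		if bytes.Contains(data, []byte(endpoint)) {
			return nil // already points at the right endpoint
		}
		fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
		return os.Remove(path)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := removeIfStale(f, endpoint); err != nil {
				fmt.Println("cleanup failed:", err)
			}
		}
	}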
	I1028 12:16:02.461051  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:02.472059  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.607734  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.165378  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:04.555857  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:03.648337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648760  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:03.648736  187305 retry.go:31] will retry after 2.586516153s: waiting for machine to come up
	I1028 12:16:06.236934  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237402  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237433  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:06.237352  187305 retry.go:31] will retry after 3.507901898s: waiting for machine to come up
	I1028 12:16:03.452795  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.710145  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.811788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.903114  186170 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:03.903247  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.403775  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.904258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.403398  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.903353  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.403907  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.903762  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.403316  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.904259  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.557581  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.056276  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.746980  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747449  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747482  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:09.747401  187305 retry.go:31] will retry after 4.499585546s: waiting for machine to come up
	I1028 12:16:08.403804  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:08.903726  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.404155  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.903968  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.403990  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.903742  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.403836  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.904088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.403293  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.903635  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.487114  185546 start.go:364] duration metric: took 56.6590668s to acquireMachinesLock for "no-preload-871884"
	I1028 12:16:15.487176  185546 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:16:15.487190  185546 fix.go:54] fixHost starting: 
	I1028 12:16:15.487650  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:16:15.487713  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:16:15.508857  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I1028 12:16:15.509318  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:16:15.510000  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:16:15.510037  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:16:15.510385  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:16:15.510599  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:15.510779  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:16:15.512738  185546 fix.go:112] recreateIfNeeded on no-preload-871884: state=Stopped err=<nil>
	I1028 12:16:15.512772  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	W1028 12:16:15.512963  185546 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:16:15.514890  185546 out.go:177] * Restarting existing kvm2 VM for "no-preload-871884" ...
	I1028 12:16:11.056427  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:13.058549  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.556621  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.516551  185546 main.go:141] libmachine: (no-preload-871884) Calling .Start
	I1028 12:16:15.516786  185546 main.go:141] libmachine: (no-preload-871884) Ensuring networks are active...
	I1028 12:16:15.517934  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network default is active
	I1028 12:16:15.518543  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network mk-no-preload-871884 is active
	I1028 12:16:15.519028  185546 main.go:141] libmachine: (no-preload-871884) Getting domain xml...
	I1028 12:16:15.519878  185546 main.go:141] libmachine: (no-preload-871884) Creating domain...
	I1028 12:16:14.249128  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249645  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has current primary IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249674  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Found IP for machine: 192.168.50.75
	I1028 12:16:14.249689  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserving static IP address...
	I1028 12:16:14.250120  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserved static IP address: 192.168.50.75
	I1028 12:16:14.250139  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for SSH to be available...
	I1028 12:16:14.250164  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.250205  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | skip adding static IP to network mk-default-k8s-diff-port-349222 - found existing host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"}
	I1028 12:16:14.250222  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Getting to WaitForSSH function...
	I1028 12:16:14.252540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.252883  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.252926  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.253035  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH client type: external
	I1028 12:16:14.253075  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa (-rw-------)
	I1028 12:16:14.253100  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:14.253115  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | About to run SSH command:
	I1028 12:16:14.253129  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | exit 0
	I1028 12:16:14.373688  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | SSH cmd err, output: <nil>: 
	I1028 12:16:14.374101  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetConfigRaw
	I1028 12:16:14.374713  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.377338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.377824  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.377857  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.378094  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:16:14.378326  186547 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:14.378345  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:14.378556  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.380694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.380976  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.380992  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.381143  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.381356  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381521  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381678  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.381882  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.382107  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.382119  186547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:14.490030  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:14.490061  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490303  186547 buildroot.go:166] provisioning hostname "default-k8s-diff-port-349222"
	I1028 12:16:14.490331  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490523  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.492989  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493395  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.493426  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493626  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.493794  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.493960  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.494104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.494258  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.494427  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.494439  186547 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-349222 && echo "default-k8s-diff-port-349222" | sudo tee /etc/hostname
	I1028 12:16:14.604373  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-349222
	
	I1028 12:16:14.604405  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.607135  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607437  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.607465  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.607891  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608060  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608187  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.608353  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.608549  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.608569  186547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-349222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-349222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-349222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:14.714933  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:14.714963  186547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:14.714990  186547 buildroot.go:174] setting up certificates
	I1028 12:16:14.714998  186547 provision.go:84] configureAuth start
	I1028 12:16:14.715007  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.715321  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.718051  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.718406  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718504  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.720638  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.720945  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.720972  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.721127  186547 provision.go:143] copyHostCerts
	I1028 12:16:14.721198  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:14.721213  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:14.721283  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:14.721407  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:14.721417  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:14.721446  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:14.721522  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:14.721544  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:14.721571  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:14.721634  186547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-349222 san=[127.0.0.1 192.168.50.75 default-k8s-diff-port-349222 localhost minikube]
	I1028 12:16:14.854227  186547 provision.go:177] copyRemoteCerts
	I1028 12:16:14.854285  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:14.854314  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.857250  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857590  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.857620  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857897  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.858091  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.858286  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.858434  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:14.940752  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:14.967575  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 12:16:14.992693  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:16:15.017801  186547 provision.go:87] duration metric: took 302.790563ms to configureAuth
	I1028 12:16:15.017831  186547 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:15.018073  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:15.018168  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.021181  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.021574  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021719  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.021894  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022113  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022317  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.022564  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.022744  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.022761  186547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:15.257308  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:15.257339  186547 machine.go:96] duration metric: took 878.998573ms to provisionDockerMachine
	I1028 12:16:15.257350  186547 start.go:293] postStartSetup for "default-k8s-diff-port-349222" (driver="kvm2")
	I1028 12:16:15.257360  186547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:15.257378  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.257695  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:15.257721  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.260380  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260767  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.260795  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260990  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.261186  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.261370  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.261513  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.341376  186547 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:15.345736  186547 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:15.345760  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:15.345820  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:15.345891  186547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:15.345978  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:15.355662  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:15.381750  186547 start.go:296] duration metric: took 124.385481ms for postStartSetup
	I1028 12:16:15.381788  186547 fix.go:56] duration metric: took 22.00329785s for fixHost
	I1028 12:16:15.381807  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.384756  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385099  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.385130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385359  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.385587  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385782  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385974  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.386165  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.386345  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.386355  186547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:15.486905  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117775.445749296
	
	I1028 12:16:15.486934  186547 fix.go:216] guest clock: 1730117775.445749296
	I1028 12:16:15.486944  186547 fix.go:229] Guest: 2024-10-28 12:16:15.445749296 +0000 UTC Remote: 2024-10-28 12:16:15.381791731 +0000 UTC m=+192.967058764 (delta=63.957565ms)
	I1028 12:16:15.487005  186547 fix.go:200] guest clock delta is within tolerance: 63.957565ms
	I1028 12:16:15.487018  186547 start.go:83] releasing machines lock for "default-k8s-diff-port-349222", held for 22.108560462s
	I1028 12:16:15.487082  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.487382  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:15.490840  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491343  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.491374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491528  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492208  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492431  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492581  186547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:15.492657  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.492706  186547 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:15.492746  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.496062  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496119  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496544  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496901  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497225  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497257  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497458  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497583  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497665  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.497798  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497977  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.590741  186547 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:15.615347  186547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:15.762979  186547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:15.770132  186547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:15.770221  186547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:15.788651  186547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:15.788684  186547 start.go:495] detecting cgroup driver to use...
	I1028 12:16:15.788751  186547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:15.806118  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:15.820916  186547 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:15.820986  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:15.835770  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:15.850994  186547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:15.979465  186547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:16.160837  186547 docker.go:233] disabling docker service ...
	I1028 12:16:16.160924  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:16.177934  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:16.194616  186547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:16.320605  186547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:16.464175  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:16.479626  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:16.502747  186547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:16.502889  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.514636  186547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:16.514695  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.528137  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.539961  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.552263  186547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:16.566275  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.578632  186547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.599084  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.611250  186547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:16.621976  186547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:16.622052  186547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:16.640800  186547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:16:16.651767  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:16.806628  186547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:16.903584  186547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:16.903655  186547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:16.909873  186547 start.go:563] Will wait 60s for crictl version
	I1028 12:16:16.909950  186547 ssh_runner.go:195] Run: which crictl
	I1028 12:16:16.915388  186547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:16.964424  186547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:16.964517  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:16.997415  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:17.032323  186547 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:17.033747  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:17.036500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.036903  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:17.036935  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.037118  186547 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:17.041698  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:17.056649  186547 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:17.056792  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:17.056840  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:17.099143  186547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:17.099233  186547 ssh_runner.go:195] Run: which lz4
	I1028 12:16:17.103882  186547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:16:17.108660  186547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:16:17.108699  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:16:13.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:13.903443  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.404017  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.903385  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.403903  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.904106  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.403713  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.903397  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.404299  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.903855  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.559178  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:19.560739  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:16.842086  185546 main.go:141] libmachine: (no-preload-871884) Waiting to get IP...
	I1028 12:16:16.843056  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:16.843514  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:16.843599  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:16.843484  187500 retry.go:31] will retry after 240.188984ms: waiting for machine to come up
	I1028 12:16:17.085193  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.085702  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.085739  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.085649  187500 retry.go:31] will retry after 361.44193ms: waiting for machine to come up
	I1028 12:16:17.448961  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.449619  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.449645  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.449576  187500 retry.go:31] will retry after 386.179326ms: waiting for machine to come up
	I1028 12:16:17.837097  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.837879  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.837907  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.837834  187500 retry.go:31] will retry after 531.12665ms: waiting for machine to come up
	I1028 12:16:18.370266  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:18.370803  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:18.370834  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:18.370746  187500 retry.go:31] will retry after 760.20134ms: waiting for machine to come up
	I1028 12:16:19.132853  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.133415  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.133444  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.133360  187500 retry.go:31] will retry after 817.773678ms: waiting for machine to come up
	I1028 12:16:19.952317  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.952800  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.952824  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.952760  187500 retry.go:31] will retry after 861.798605ms: waiting for machine to come up
	I1028 12:16:20.816156  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:20.816794  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:20.816826  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:20.816750  187500 retry.go:31] will retry after 908.062214ms: waiting for machine to come up
	I1028 12:16:18.686980  186547 crio.go:462] duration metric: took 1.583134893s to copy over tarball
	I1028 12:16:18.687053  186547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:16:21.016264  186547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.329174428s)
	I1028 12:16:21.016309  186547 crio.go:469] duration metric: took 2.329300291s to extract the tarball
	I1028 12:16:21.016322  186547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:16:21.053950  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:21.112876  186547 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:16:21.112903  186547 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:16:21.112914  186547 kubeadm.go:934] updating node { 192.168.50.75 8444 v1.31.2 crio true true} ...
	I1028 12:16:21.113037  186547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-349222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:21.113119  186547 ssh_runner.go:195] Run: crio config
	I1028 12:16:21.179853  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:21.179877  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:21.179888  186547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:21.179907  186547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.75 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-349222 NodeName:default-k8s-diff-port-349222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:21.180039  186547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.75
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-349222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.75"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.75"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:21.180117  186547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:21.191650  186547 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:21.191721  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:21.201670  186547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1028 12:16:21.220426  186547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:21.240774  186547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1028 12:16:21.263336  186547 ssh_runner.go:195] Run: grep 192.168.50.75	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:21.267818  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:21.281577  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:21.441517  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:21.464117  186547 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222 for IP: 192.168.50.75
	I1028 12:16:21.464145  186547 certs.go:194] generating shared ca certs ...
	I1028 12:16:21.464167  186547 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:21.464392  186547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:21.464458  186547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:21.464485  186547 certs.go:256] generating profile certs ...
	I1028 12:16:21.464599  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/client.key
	I1028 12:16:21.464691  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key.e54e33e0
	I1028 12:16:21.464749  186547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key
	I1028 12:16:21.464919  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:21.464967  186547 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:21.464981  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:21.465006  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:21.465031  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:21.465069  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:21.465124  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:21.465976  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:21.511145  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:21.572071  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:21.613442  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:21.655508  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 12:16:21.687378  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:16:21.713227  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:21.738909  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:21.765274  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:21.792427  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:21.817632  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:21.842996  186547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:21.861059  186547 ssh_runner.go:195] Run: openssl version
	I1028 12:16:21.867814  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:21.880769  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886245  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886325  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.893179  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:21.908974  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:21.926992  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932350  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932428  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.939073  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:21.952302  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:21.965485  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971486  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971564  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.978531  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:21.995399  186547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:22.001453  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:22.009449  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:22.016898  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:22.024410  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:22.033151  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:22.040981  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
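The openssl -checkend 86400 runs above ask whether each control-plane certificate remains valid for at least another 24 hours. Below is a minimal Go sketch of the same check, assuming openssl is on PATH; the helper name and error handling are illustrative rather than minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidFor24h is a hypothetical helper: it shells out to openssl exactly as
	// the log above does and reports whether the certificate survives another 86400s.
	func certValidFor24h(path string) (bool, error) {
		// openssl exits 0 if the cert will NOT expire within the window,
		// 1 if it will; anything else is a real failure.
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
				return false, nil
			}
			return false, err
		}
		return true, nil
	}

	func main() {
		ok, err := certValidFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		fmt.Println(ok, err)
	}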
	I1028 12:16:22.048298  186547 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:22.048441  186547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:22.048531  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.095210  186547 cri.go:89] found id: ""
	I1028 12:16:22.095319  186547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:22.111740  186547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:22.111772  186547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:22.111828  186547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:22.122472  186547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:22.123648  186547 kubeconfig.go:125] found "default-k8s-diff-port-349222" server: "https://192.168.50.75:8444"
	I1028 12:16:22.126117  186547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:22.137057  186547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.75
	I1028 12:16:22.137096  186547 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:22.137108  186547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:22.137179  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.180526  186547 cri.go:89] found id: ""
	I1028 12:16:22.180638  186547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:22.197697  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:22.208176  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:22.208197  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:22.208246  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:16:22.218379  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:22.218438  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:22.228844  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:16:22.239330  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:22.239407  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:22.250200  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.260309  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:22.260374  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.271041  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:16:22.281556  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:22.281637  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:22.294003  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:22.305123  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:22.426791  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:18.403494  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:18.903364  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.403869  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.904257  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.404252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.904028  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.404218  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.903631  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.403882  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.904188  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.058068  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:24.059822  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:21.726767  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:21.727332  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:21.727373  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:21.727224  187500 retry.go:31] will retry after 1.684184533s: waiting for machine to come up
	I1028 12:16:23.412691  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:23.413228  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:23.413254  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:23.413177  187500 retry.go:31] will retry after 1.416062445s: waiting for machine to come up
	I1028 12:16:24.830846  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:24.831450  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:24.831480  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:24.831393  187500 retry.go:31] will retry after 2.716897952s: waiting for machine to come up
	I1028 12:16:23.288371  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.506229  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.575063  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.644776  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:23.644896  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.145579  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.645050  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.666456  186547 api_server.go:72] duration metric: took 1.021679294s to wait for apiserver process to appear ...
	I1028 12:16:24.666493  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:24.666518  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:24.667086  186547 api_server.go:269] stopped: https://192.168.50.75:8444/healthz: Get "https://192.168.50.75:8444/healthz": dial tcp 192.168.50.75:8444: connect: connection refused
	I1028 12:16:25.166765  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:23.404152  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:23.904225  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.403333  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.904323  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.404279  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.904317  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.404253  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.904083  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.403621  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.903752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.336957  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.337000  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.337015  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.382075  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.382110  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.667083  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.671910  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:28.671935  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.167591  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.173364  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:29.173397  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.666902  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.672205  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:16:29.679964  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:16:29.680002  186547 api_server.go:131] duration metric: took 5.013500479s to wait for apiserver health ...
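The polling above treats 403 responses (before RBAC bootstrap completes) and 500 responses (while post-start hooks finish) as not-ready and stops at the first 200. Below is a minimal sketch of such a wait loop; the URL is taken from the log, while the timeout, retry interval, and the decision to skip TLS verification are assumptions made for brevity rather than minikube's real client setup.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	// Non-200 answers are treated as "keep waiting", matching the behaviour in the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption for this sketch only: skip certificate verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.50.75:8444/healthz", 4*time.Minute))
	}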
	I1028 12:16:29.680014  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:29.680032  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:29.681992  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:16:26.558629  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.560116  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:27.550893  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:27.551454  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:27.551476  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:27.551438  187500 retry.go:31] will retry after 2.986712877s: waiting for machine to come up
	I1028 12:16:30.539999  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:30.540601  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:30.540632  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:30.540526  187500 retry.go:31] will retry after 3.947007446s: waiting for machine to come up
	I1028 12:16:29.683325  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:16:29.697362  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:16:29.717296  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:16:29.726327  186547 system_pods.go:59] 8 kube-system pods found
	I1028 12:16:29.726363  186547 system_pods.go:61] "coredns-7c65d6cfc9-k5h7n" [e203fcce-1a8a-431b-a816-d75b33ca9417] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:16:29.726374  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [2214daee-0302-44cd-9297-836eeb011232] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:16:29.726391  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [c4331c24-07e2-4b50-ab04-31bcd00960e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:16:29.726402  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [9dddd9fb-ad03-4771-af1b-d9e1e024af52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:16:29.726413  186547 system_pods.go:61] "kube-proxy-bqq65" [ed5d0c14-0ddb-4446-a2f7-ae457d629fb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 12:16:29.726423  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [9cfcc366-038f-43a9-b919-48742fa419af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:16:29.726434  186547 system_pods.go:61] "metrics-server-6867b74b74-cgkz9" [3d919412-efb8-4030-a5d0-3c325c824c48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:16:29.726445  186547 system_pods.go:61] "storage-provisioner" [613b003c-1eee-4294-947f-ea7a21edc8d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:16:29.726464  186547 system_pods.go:74] duration metric: took 9.135782ms to wait for pod list to return data ...
	I1028 12:16:29.726478  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:16:29.729971  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:16:29.729996  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:16:29.730009  186547 node_conditions.go:105] duration metric: took 3.525858ms to run NodePressure ...
	I1028 12:16:29.730035  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:30.043775  186547 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048614  186547 kubeadm.go:739] kubelet initialised
	I1028 12:16:30.048638  186547 kubeadm.go:740] duration metric: took 4.83853ms waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048647  186547 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:16:30.053908  186547 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:32.063283  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.404110  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.904058  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.404042  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.903819  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.404114  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.904140  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.404241  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.903586  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.403858  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.903566  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.057577  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:33.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:35.557338  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:34.491658  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492175  185546 main.go:141] libmachine: (no-preload-871884) Found IP for machine: 192.168.72.156
	I1028 12:16:34.492197  185546 main.go:141] libmachine: (no-preload-871884) Reserving static IP address...
	I1028 12:16:34.492215  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has current primary IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492674  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.492704  185546 main.go:141] libmachine: (no-preload-871884) Reserved static IP address: 192.168.72.156
	I1028 12:16:34.492739  185546 main.go:141] libmachine: (no-preload-871884) DBG | skip adding static IP to network mk-no-preload-871884 - found existing host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"}
	I1028 12:16:34.492763  185546 main.go:141] libmachine: (no-preload-871884) DBG | Getting to WaitForSSH function...
	I1028 12:16:34.492777  185546 main.go:141] libmachine: (no-preload-871884) Waiting for SSH to be available...
	I1028 12:16:34.495176  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495502  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.495536  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495682  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH client type: external
	I1028 12:16:34.495714  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa (-rw-------)
	I1028 12:16:34.495747  185546 main.go:141] libmachine: (no-preload-871884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:34.495770  185546 main.go:141] libmachine: (no-preload-871884) DBG | About to run SSH command:
	I1028 12:16:34.495796  185546 main.go:141] libmachine: (no-preload-871884) DBG | exit 0
	I1028 12:16:34.625650  185546 main.go:141] libmachine: (no-preload-871884) DBG | SSH cmd err, output: <nil>: 
	I1028 12:16:34.625959  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetConfigRaw
	I1028 12:16:34.626602  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.629137  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629442  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.629477  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629733  185546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/config.json ...
	I1028 12:16:34.629938  185546 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:34.629957  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:34.630153  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.632415  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.632777  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.632804  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.633033  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.633247  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633422  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633592  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.633762  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.633954  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.633968  185546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:34.738368  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:34.738406  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738696  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:16:34.738729  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738926  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.741750  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742216  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.742322  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742339  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.742538  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742689  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742857  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.743032  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.743248  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.743266  185546 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-871884 && echo "no-preload-871884" | sudo tee /etc/hostname
	I1028 12:16:34.863767  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-871884
	
	I1028 12:16:34.863802  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.867136  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867530  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.867561  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867822  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.868039  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868251  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868430  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.868634  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.868880  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.868905  185546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-871884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-871884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-871884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:34.989420  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:34.989450  185546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:34.989468  185546 buildroot.go:174] setting up certificates
	I1028 12:16:34.989476  185546 provision.go:84] configureAuth start
	I1028 12:16:34.989485  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.989790  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.992627  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.992977  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.993007  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.993225  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.995586  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.995888  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.995911  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.996122  185546 provision.go:143] copyHostCerts
	I1028 12:16:34.996190  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:34.996204  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:34.996261  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:34.996375  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:34.996384  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:34.996408  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:34.996472  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:34.996482  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:34.996499  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:34.996559  185546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.no-preload-871884 san=[127.0.0.1 192.168.72.156 localhost minikube no-preload-871884]
	I1028 12:16:35.437900  185546 provision.go:177] copyRemoteCerts
	I1028 12:16:35.437961  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:35.437985  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.440936  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441329  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.441361  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441555  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.441756  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.441921  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.442085  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.524911  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:35.554631  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 12:16:35.586946  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:16:35.620121  185546 provision.go:87] duration metric: took 630.630531ms to configureAuth
	I1028 12:16:35.620155  185546 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:35.620395  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:35.620502  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.623316  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623607  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.623643  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623886  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.624099  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624290  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624433  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.624612  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:35.624794  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:35.624810  185546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:35.886145  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:35.886178  185546 machine.go:96] duration metric: took 1.256224912s to provisionDockerMachine
	I1028 12:16:35.886196  185546 start.go:293] postStartSetup for "no-preload-871884" (driver="kvm2")
	I1028 12:16:35.886209  185546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:35.886232  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:35.886615  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:35.886653  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.889615  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890016  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.890048  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.890459  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.890654  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.890798  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.977889  185546 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:35.983360  185546 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:35.983387  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:35.983454  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:35.983543  185546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:35.983674  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:35.997400  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:36.025665  185546 start.go:296] duration metric: took 139.454088ms for postStartSetup
	I1028 12:16:36.025714  185546 fix.go:56] duration metric: took 20.538525254s for fixHost
	I1028 12:16:36.025739  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.028490  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.028933  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.028964  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.029170  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.029386  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029573  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029734  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.029909  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:36.030087  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:36.030098  185546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:36.138559  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117796.101397993
	
	I1028 12:16:36.138589  185546 fix.go:216] guest clock: 1730117796.101397993
	I1028 12:16:36.138599  185546 fix.go:229] Guest: 2024-10-28 12:16:36.101397993 +0000 UTC Remote: 2024-10-28 12:16:36.025719388 +0000 UTC m=+359.787107454 (delta=75.678605ms)
	I1028 12:16:36.138633  185546 fix.go:200] guest clock delta is within tolerance: 75.678605ms
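The lines above compare the guest's `date +%s.%N` output against the host clock and accept the roughly 75ms difference as within tolerance. A small Go sketch of that comparison follows; the parsing and the 2s tolerance are illustrative assumptions, not minikube's exact logic, and the float parse loses sub-microsecond precision.

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// clockDeltaWithinTolerance parses the guest's `date +%s.%N` output and checks
	// whether it differs from the given host time by less than tol (assumed 2s here).
	func clockDeltaWithinTolerance(guestOutput string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := host.Sub(guest)
		return delta, math.Abs(float64(delta)) < float64(tol), nil
	}

	func main() {
		// Guest and host timestamps copied from the log lines above.
		host := time.Date(2024, time.October, 28, 12, 16, 36, 25719388, time.UTC)
		delta, ok, err := clockDeltaWithinTolerance("1730117796.101397993", host, 2*time.Second)
		fmt.Println(delta, ok, err)
	}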
	I1028 12:16:36.138638  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 20.651488254s
	I1028 12:16:36.138663  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.138953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:36.141711  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142144  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.142180  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142323  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.142975  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143165  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143240  185546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:36.143306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.143378  185546 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:36.143399  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.145980  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146166  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146348  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146375  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146507  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146617  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146657  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146701  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.146795  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146882  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.146953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.147013  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.147071  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.147202  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.223364  185546 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:36.246964  185546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:34.561016  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.564296  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.396734  185546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:36.403214  185546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:36.403298  185546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:36.421658  185546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
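The find/mv step above disables any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube manages stays active. A rough Go equivalent of that rename pass (a local-filesystem sketch, not minikube's ssh_runner-based code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI config files out of the way,
    // mirroring the `find ... -exec mv {} {}.mk_disabled` step in the log.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeCNI("/etc/cni/net.d")
        fmt.Println(disabled, err)
    }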
	I1028 12:16:36.421695  185546 start.go:495] detecting cgroup driver to use...
	I1028 12:16:36.421772  185546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:36.441133  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:36.456750  185546 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:36.456806  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:36.473457  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:36.489210  185546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:36.621054  185546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:36.767341  185546 docker.go:233] disabling docker service ...
	I1028 12:16:36.767432  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:36.784655  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:36.799522  185546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:36.942312  185546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:37.066636  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:37.082284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:37.102462  185546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:37.102530  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.113687  185546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:37.113760  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.125624  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.137036  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.148417  185546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:37.160015  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.171382  185546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.192342  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
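The sed invocations above pin the pause image, switch cri-o to the cgroupfs cgroup manager, and inject net.ipv4.ip_unprivileged_port_start=0 into the default sysctls of /etc/crio/crio.conf.d/02-crio.conf. A compact Go sketch of the first two substitutions (local file editing only; the real code drives these edits through ssh_runner):

    package main

    import (
        "os"
        "regexp"
    )

    // rewriteCrioConf mirrors two of the sed edits from the log: force the
    // pause image and the cgroup manager in a cri-o drop-in config.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
            "registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
            panic(err)
        }
    }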
	I1028 12:16:37.204353  185546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:37.215188  185546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:37.215275  185546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:37.230653  185546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:16:37.241484  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:37.382996  185546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:37.479263  185546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:37.479363  185546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:37.485265  185546 start.go:563] Will wait 60s for crictl version
	I1028 12:16:37.485330  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:37.489545  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:37.536126  185546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
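After restarting crio, the start code waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to answer a version query. A generic poll-with-timeout helper in the spirit of those waits, sketched below (the 500ms poll interval is an assumption, not taken from the log):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until a filesystem path exists or the timeout expires,
    // similar to the "Will wait 60s for socket path" step in the log.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
    }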
	I1028 12:16:37.536212  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.567538  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.600370  185546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:33.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:33.903341  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.403703  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.903445  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.404040  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.904246  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.403798  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.903950  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.403912  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.903423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.559329  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:40.057624  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:37.601686  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:37.604235  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604568  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:37.604601  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604782  185546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:37.609354  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
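The bash one-liner above rewrites /etc/hosts in one shot: drop any line ending in a tab plus host.minikube.internal, append the fresh 192.168.72.1 mapping, and copy the result back under sudo. The same filter-and-append expressed as a small Go sketch (local file only; the helper name is hypothetical):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHostsEntry removes any existing line for the given hostname in an
    // /etc/hosts-style file and appends "ip<TAB>hostname", as the log's
    // grep -v / echo / cp pipeline does.
    func pinHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+hostname) {
                continue // drop the stale mapping
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }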
	I1028 12:16:37.624966  185546 kubeadm.go:883] updating cluster {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:37.625081  185546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:37.625117  185546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:37.664112  185546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:37.664149  185546 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:16:37.664262  185546 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.664306  185546 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.664334  185546 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 12:16:37.664311  185546 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.664352  185546 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.664393  185546 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.664434  185546 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.664399  185546 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666080  185546 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.666083  185546 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.666081  185546 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.666142  185546 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.666085  185546 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 12:16:37.666079  185546 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.666185  185546 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666398  185546 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.840639  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.857089  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.859107  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 12:16:37.859358  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.863640  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.867925  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.876221  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.921581  185546 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 12:16:37.921638  185546 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.921689  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.042970  185546 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 12:16:38.043015  185546 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.043068  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093917  185546 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 12:16:38.093954  185546 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 12:16:38.093973  185546 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.093985  185546 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.094029  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094038  185546 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 12:16:38.094057  185546 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.094087  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.094094  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094030  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093976  185546 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 12:16:38.094143  185546 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.094152  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.094175  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.110134  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.110302  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.188922  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.188979  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.193920  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.193929  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.292698  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.325562  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.331855  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.332873  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.345880  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.345951  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.414842  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.470776  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.470949  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 12:16:38.471044  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.481197  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 12:16:38.481333  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:38.503147  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 12:16:38.503171  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:38.532884  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 12:16:38.533001  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:38.552405  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 12:16:38.552417  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 12:16:38.552472  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552485  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 12:16:38.552523  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:38.552529  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552552  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 12:16:38.552527  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 12:16:38.552598  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 12:16:38.829851  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127678  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.575124569s)
	I1028 12:16:41.127722  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 12:16:41.127744  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.575188461s)
	I1028 12:16:41.127775  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 12:16:41.127785  185546 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.297902587s)
	I1028 12:16:41.127803  185546 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127818  185546 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 12:16:41.127850  185546 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127858  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127895  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:39.064564  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:41.563643  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:38.403644  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:38.904220  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.404068  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.904158  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.403660  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.903678  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.404061  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.903568  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.404297  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.904036  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.058025  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:44.557594  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.190694  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062807881s)
	I1028 12:16:43.190736  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 12:16:43.190752  185546 ssh_runner.go:235] Completed: which crictl: (2.062836368s)
	I1028 12:16:43.190773  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:43.190827  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:43.190831  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:45.281583  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.090685426s)
	I1028 12:16:45.281620  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 12:16:45.281650  185546 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281679  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.090821035s)
	I1028 12:16:45.281698  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281750  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:45.325500  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:42.565395  186547 pod_ready.go:93] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.565425  186547 pod_ready.go:82] duration metric: took 12.511487215s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.565438  186547 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572364  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.572388  186547 pod_ready.go:82] duration metric: took 6.941356ms for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572402  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579074  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.579099  186547 pod_ready.go:82] duration metric: took 6.689137ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579116  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584088  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.584108  186547 pod_ready.go:82] duration metric: took 4.985095ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584118  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588810  186547 pod_ready.go:93] pod "kube-proxy-bqq65" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.588837  186547 pod_ready.go:82] duration metric: took 4.711896ms for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588849  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758349  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:43.758376  186547 pod_ready.go:82] duration metric: took 1.169519383s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758387  186547 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:45.766209  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.404022  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:43.903570  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.403673  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.903585  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.403476  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.904069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.403906  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.904264  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.903991  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.059150  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.556589  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.174287  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.84875195s)
	I1028 12:16:49.174340  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 12:16:49.174291  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.892568087s)
	I1028 12:16:49.174422  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 12:16:49.174427  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:49.174466  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:49.174524  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:48.265641  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:50.271513  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:48.404207  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:48.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.404088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.903614  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.403587  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.904256  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.404314  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.903794  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.404122  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.903312  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.557320  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.557540  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:51.438821  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.26426785s)
	I1028 12:16:51.438857  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 12:16:51.438890  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.264449757s)
	I1028 12:16:51.438893  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:51.438911  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 12:16:51.438945  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:52.890902  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451935078s)
	I1028 12:16:52.890933  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 12:16:52.890960  185546 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:52.891010  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:53.643145  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 12:16:53.643208  185546 cache_images.go:123] Successfully loaded all cached images
	I1028 12:16:53.643216  185546 cache_images.go:92] duration metric: took 15.979050279s to LoadCachedImages
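The LoadCachedImages pass above probes each required image with podman image inspect, removes any stale copy with crictl rmi, and then streams the cached tarball in with podman load -i before moving on. A condensed check-then-load loop in the same spirit (hypothetical helper, error handling trimmed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureImage loads a cached image tarball into the CRI-O image store
    // when `podman image inspect` cannot find the expected image.
    func ensureImage(image, tarball string) error {
        if err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Run(); err == nil {
            return nil // already present in the container runtime
        }
        // Drop any partial/stale copy first, ignoring "not found" errors.
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        err := ensureImage("registry.k8s.io/kube-apiserver:v1.31.2",
            "/var/lib/minikube/images/kube-apiserver_v1.31.2")
        fmt.Println(err)
    }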
	I1028 12:16:53.643231  185546 kubeadm.go:934] updating node { 192.168.72.156 8443 v1.31.2 crio true true} ...
	I1028 12:16:53.643393  185546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-871884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:53.643480  185546 ssh_runner.go:195] Run: crio config
	I1028 12:16:53.701778  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:16:53.701805  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:53.701814  185546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:53.701836  185546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.156 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-871884 NodeName:no-preload-871884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:53.701952  185546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-871884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.156"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.156"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:53.702019  185546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:53.714245  185546 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:53.714327  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:53.725610  185546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 12:16:53.745071  185546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:53.766897  185546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1028 12:16:53.787043  185546 ssh_runner.go:195] Run: grep 192.168.72.156	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:53.791580  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:53.805088  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:53.945235  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:53.964073  185546 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884 for IP: 192.168.72.156
	I1028 12:16:53.964099  185546 certs.go:194] generating shared ca certs ...
	I1028 12:16:53.964115  185546 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:53.964290  185546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:53.964338  185546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:53.964355  185546 certs.go:256] generating profile certs ...
	I1028 12:16:53.964458  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.key
	I1028 12:16:53.964533  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key.6934b48e
	I1028 12:16:53.964584  185546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key
	I1028 12:16:53.964719  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:53.964750  185546 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:53.964765  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:53.964801  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:53.964831  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:53.964866  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:53.964921  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:53.965632  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:54.004592  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:54.044270  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:54.079496  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:54.114473  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:16:54.141836  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:54.175201  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:54.202282  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:54.227874  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:54.254818  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:54.282950  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:54.310204  185546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:54.328834  185546 ssh_runner.go:195] Run: openssl version
	I1028 12:16:54.335391  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:54.347474  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352687  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352755  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.358834  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:54.373155  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:54.387035  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392179  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392281  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.398488  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:54.412352  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:54.426361  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431415  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431470  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.437583  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
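The openssl x509 -hash calls above compute each CA file's subject hash (b5213941, 51391683, 3ec20f2e) and create /etc/ssl/certs/<hash>.0 symlinks so the system trust store can resolve the certificates. A small sketch of that hash-and-link step, shelling out to openssl just as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCACert hashes a CA certificate with `openssl x509 -hash` and creates
    // the /etc/ssl/certs/<hash>.0 symlink OpenSSL expects for trust lookups.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); err == nil {
            return nil // link already present
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }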
	I1028 12:16:54.450708  185546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:54.456625  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:54.463458  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:54.469939  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:54.477873  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:54.484962  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:54.491679  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
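Each openssl x509 -checkend 86400 run above confirms that the corresponding control-plane certificate stays valid for at least another 24 hours before the restart proceeds. The same check can be expressed natively; a sketch using crypto/x509 instead of shelling out:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given window (the log uses 86400s, i.e. 24h).
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }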
	I1028 12:16:54.498106  185546 kubeadm.go:392] StartCluster: {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:54.498211  185546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:54.498287  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.543142  185546 cri.go:89] found id: ""
	I1028 12:16:54.543250  185546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:54.555948  185546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:54.555971  185546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:54.556021  185546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:54.566954  185546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:54.567990  185546 kubeconfig.go:125] found "no-preload-871884" server: "https://192.168.72.156:8443"
	I1028 12:16:54.570149  185546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:54.581005  185546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.156
	I1028 12:16:54.581039  185546 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:54.581051  185546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:54.581100  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.622676  185546 cri.go:89] found id: ""
	I1028 12:16:54.622742  185546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:54.642427  185546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:54.655104  185546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:54.655131  185546 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:54.655199  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:54.665367  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:54.665432  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:54.675664  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:54.685921  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:54.685997  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:54.698451  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.709982  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:54.710060  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.721243  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:54.731699  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:54.731780  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:54.743365  185546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:54.754284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:54.868055  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.645470  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.858805  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.940632  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:56.020654  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:56.020735  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.764963  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:54.766822  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.768500  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.403716  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:53.903325  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.404326  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.903529  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.403679  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.903480  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.403429  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.904252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.403496  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.058614  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.556085  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:00.556460  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.521589  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.021710  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.066266  185546 api_server.go:72] duration metric: took 1.045608096s to wait for apiserver process to appear ...
	I1028 12:16:57.066305  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:57.066326  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:16:57.066862  185546 api_server.go:269] stopped: https://192.168.72.156:8443/healthz: Get "https://192.168.72.156:8443/healthz": dial tcp 192.168.72.156:8443: connect: connection refused
	I1028 12:16:57.567124  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.159147  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.159179  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.159193  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.171505  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.171530  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.566560  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.570920  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:00.570947  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.066537  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.071173  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.071205  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.566517  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.577822  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.577851  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:02.066514  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:02.071117  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:17:02.078265  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:17:02.078293  185546 api_server.go:131] duration metric: took 5.011981306s to wait for apiserver health ...
	I1028 12:17:02.078302  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:17:02.078308  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:17:02.080348  185546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
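
The healthz polling sequence above repeatedly probes https://192.168.72.156:8443/healthz, treating 403 and 500 responses as "apiserver up but not yet ready" until a 200 arrives. A minimal sketch of that readiness loop, assuming a placeholder endpoint and skipping TLS verification (this is an illustration, not minikube's api_server.go implementation):

// Sketch only: poll /healthz until it returns 200 OK or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the bootstrap apiserver cert is not trusted by this
		// probe, so verification is skipped for the bare health check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // 200: control plane is healthy
			}
			// 403/500 (as seen in the log) mean the apiserver is up but
			// post-start hooks such as rbac/bootstrap-roles have not finished.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.156:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
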
	I1028 12:16:59.267565  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:01.766399  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.404020  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:58.903743  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.403548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.903515  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.403423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.903757  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.403620  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.903710  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.403932  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.903729  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.081626  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:17:02.103809  185546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:17:02.135225  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:17:02.152051  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:17:02.152102  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:17:02.152113  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:17:02.152125  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:17:02.152133  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:17:02.152146  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:17:02.152159  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:17:02.152167  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:17:02.152174  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:17:02.152183  185546 system_pods.go:74] duration metric: took 16.930389ms to wait for pod list to return data ...
	I1028 12:17:02.152192  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:17:02.157475  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:17:02.157504  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:17:02.157515  185546 node_conditions.go:105] duration metric: took 5.317861ms to run NodePressure ...
	I1028 12:17:02.157548  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:17:02.476553  185546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482764  185546 kubeadm.go:739] kubelet initialised
	I1028 12:17:02.482789  185546 kubeadm.go:740] duration metric: took 6.205425ms waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482798  185546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:02.487480  185546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.495454  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495482  185546 pod_ready.go:82] duration metric: took 7.976331ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.495495  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495505  185546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.499904  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499931  185546 pod_ready.go:82] duration metric: took 4.41555ms for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.499941  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499948  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.504272  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504300  185546 pod_ready.go:82] duration metric: took 4.345522ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.504325  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504337  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.538786  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538826  185546 pod_ready.go:82] duration metric: took 34.474629ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.538841  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538851  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.939462  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939490  185546 pod_ready.go:82] duration metric: took 400.627739ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.939502  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939511  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.339338  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339369  185546 pod_ready.go:82] duration metric: took 399.848996ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.339384  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339394  185546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.739585  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739640  185546 pod_ready.go:82] duration metric: took 400.235271ms for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.739655  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739665  185546 pod_ready.go:39] duration metric: took 1.256859696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:03.739682  185546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:17:03.755064  185546 ops.go:34] apiserver oom_adj: -16
	I1028 12:17:03.755086  185546 kubeadm.go:597] duration metric: took 9.199108841s to restartPrimaryControlPlane
	I1028 12:17:03.755096  185546 kubeadm.go:394] duration metric: took 9.256999682s to StartCluster
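
The pod_ready entries above wait for each system-critical pod's Ready condition and skip the wait while the hosting node itself reports Ready=False. A rough client-go sketch of the per-pod readiness check (kubeconfig path and pod name below are placeholders taken from this log, and the code is an assumption about the pattern, not minikube's pod_ready.go):

// Sketch only: report whether a pod's Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "etcd-no-preload-871884")
	fmt.Println(ready, err)
}
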
	I1028 12:17:03.755111  185546 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.755175  185546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:17:03.757048  185546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.757327  185546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:17:03.757425  185546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:17:03.757535  185546 addons.go:69] Setting storage-provisioner=true in profile "no-preload-871884"
	I1028 12:17:03.757563  185546 addons.go:234] Setting addon storage-provisioner=true in "no-preload-871884"
	I1028 12:17:03.757565  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:17:03.757589  185546 addons.go:69] Setting metrics-server=true in profile "no-preload-871884"
	I1028 12:17:03.757617  185546 addons.go:234] Setting addon metrics-server=true in "no-preload-871884"
	I1028 12:17:03.757568  185546 addons.go:69] Setting default-storageclass=true in profile "no-preload-871884"
	W1028 12:17:03.757626  185546 addons.go:243] addon metrics-server should already be in state true
	I1028 12:17:03.757635  185546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-871884"
	W1028 12:17:03.757573  185546 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:17:03.757669  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.757713  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.758051  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758093  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758196  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758233  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758231  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758355  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.759378  185546 out.go:177] * Verifying Kubernetes components...
	I1028 12:17:03.761108  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:17:03.786180  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I1028 12:17:03.786344  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I1028 12:17:03.787005  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787096  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.787658  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.788034  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.789126  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.789149  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.789333  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.789366  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.790199  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.790591  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.793866  185546 addons.go:234] Setting addon default-storageclass=true in "no-preload-871884"
	W1028 12:17:03.793890  185546 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:17:03.793920  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.794332  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.794384  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.806461  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I1028 12:17:03.806960  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.807572  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1028 12:17:03.807644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.807835  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808074  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.808188  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.808349  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.808603  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.808624  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808993  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.809610  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.809665  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.810531  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.812676  185546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:17:03.813307  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I1028 12:17:03.813821  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.814228  185546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:03.814248  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:17:03.814266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.814350  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.814373  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.814848  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.815284  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.815323  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.817336  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817751  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.817776  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817889  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.818079  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.818219  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.818357  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.830425  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1028 12:17:03.830940  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.831486  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.831507  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.831905  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.832125  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.834275  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.835260  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I1028 12:17:03.835687  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.836180  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.836200  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.836527  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.836604  185546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:17:03.836741  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.838273  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:17:03.838290  185546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:17:03.838306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.838508  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.839044  185546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:03.839060  185546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:17:03.839080  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.842836  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843272  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.843291  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843461  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.843598  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.843767  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.843774  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843909  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.844312  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.844330  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.845228  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.845354  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.845474  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.845623  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.981979  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:17:04.003932  185546 node_ready.go:35] waiting up to 6m0s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:04.071389  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:04.169654  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:04.186781  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:17:04.186808  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:17:04.252889  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:17:04.252921  185546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:17:04.315140  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.315166  185546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:17:04.395995  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.489084  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489122  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489426  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.489445  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489470  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.489481  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489490  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489763  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489781  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.497272  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.497297  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.497647  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.497677  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.497702  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185405  185546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.015712456s)
	I1028 12:17:05.185458  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185469  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.185749  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.185768  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185778  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185786  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.186142  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.186160  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.186149  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.294924  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.294953  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295282  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295301  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295319  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295329  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.295339  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295584  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295615  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295622  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295641  185546 addons.go:475] Verifying addon metrics-server=true in "no-preload-871884"
	I1028 12:17:05.297689  185546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1028 12:17:02.557465  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:04.557517  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:05.298945  185546 addons.go:510] duration metric: took 1.541528913s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1028 12:17:06.008731  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.766439  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:06.267839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:03.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:03.904015  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:03.904157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:03.952859  186170 cri.go:89] found id: ""
	I1028 12:17:03.952891  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.952903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:03.952911  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:03.952972  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:03.991366  186170 cri.go:89] found id: ""
	I1028 12:17:03.991395  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.991406  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:03.991414  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:03.991472  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:04.030462  186170 cri.go:89] found id: ""
	I1028 12:17:04.030494  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.030505  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:04.030513  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:04.030577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:04.066765  186170 cri.go:89] found id: ""
	I1028 12:17:04.066797  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.066808  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:04.066829  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:04.066890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:04.113262  186170 cri.go:89] found id: ""
	I1028 12:17:04.113291  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.113321  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:04.113329  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:04.113397  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:04.162767  186170 cri.go:89] found id: ""
	I1028 12:17:04.162804  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.162816  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:04.162832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:04.162906  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:04.209735  186170 cri.go:89] found id: ""
	I1028 12:17:04.209768  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.209780  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:04.209788  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:04.209853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:04.258945  186170 cri.go:89] found id: ""
	I1028 12:17:04.258981  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.258993  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:04.259004  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:04.259031  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:04.314152  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:04.314191  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:04.330109  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:04.330154  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:04.495068  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:04.495096  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:04.495111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:04.576574  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:04.576612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.129008  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:07.149770  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:07.149835  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:07.200603  186170 cri.go:89] found id: ""
	I1028 12:17:07.200636  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.200648  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:07.200656  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:07.200733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:07.242681  186170 cri.go:89] found id: ""
	I1028 12:17:07.242709  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.242717  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:07.242723  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:07.242770  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:07.286826  186170 cri.go:89] found id: ""
	I1028 12:17:07.286860  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.286873  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:07.286881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:07.286943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:07.327730  186170 cri.go:89] found id: ""
	I1028 12:17:07.327765  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.327777  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:07.327787  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:07.327855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:07.369138  186170 cri.go:89] found id: ""
	I1028 12:17:07.369167  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.369178  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:07.369187  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:07.369257  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:07.411640  186170 cri.go:89] found id: ""
	I1028 12:17:07.411678  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.411690  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:07.411697  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:07.411758  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:07.454066  186170 cri.go:89] found id: ""
	I1028 12:17:07.454099  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.454109  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:07.454119  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:07.454180  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:07.489981  186170 cri.go:89] found id: ""
	I1028 12:17:07.490011  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.490020  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:07.490030  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:07.490044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:07.559890  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:07.559916  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:07.559927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:07.641601  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:07.641647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.687694  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:07.687732  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:07.739346  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:07.739389  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:06.558978  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:09.058557  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:08.507261  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:10.508790  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:11.007666  185546 node_ready.go:49] node "no-preload-871884" has status "Ready":"True"
	I1028 12:17:11.007698  185546 node_ready.go:38] duration metric: took 7.003728813s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:11.007710  185546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:11.014677  185546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020020  185546 pod_ready.go:93] pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:11.020042  185546 pod_ready.go:82] duration metric: took 5.339994ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020053  185546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:08.765053  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.766104  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.262069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:10.277467  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:10.277566  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:10.320331  186170 cri.go:89] found id: ""
	I1028 12:17:10.320366  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.320378  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:10.320387  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:10.320455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:10.357204  186170 cri.go:89] found id: ""
	I1028 12:17:10.357235  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.357252  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:10.357261  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:10.357324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:10.392480  186170 cri.go:89] found id: ""
	I1028 12:17:10.392510  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.392519  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:10.392526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:10.392574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:10.430084  186170 cri.go:89] found id: ""
	I1028 12:17:10.430120  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.430132  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:10.430140  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:10.430207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:10.479689  186170 cri.go:89] found id: ""
	I1028 12:17:10.479717  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.479724  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:10.479730  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:10.479786  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:10.520871  186170 cri.go:89] found id: ""
	I1028 12:17:10.520902  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.520912  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:10.520920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:10.520978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:10.559121  186170 cri.go:89] found id: ""
	I1028 12:17:10.559154  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.559167  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:10.559176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:10.559254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:10.596552  186170 cri.go:89] found id: ""
	I1028 12:17:10.596583  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.596594  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:10.596603  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:10.596615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:10.673014  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:10.673037  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:10.673055  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:10.762942  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:10.762982  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:10.805866  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:10.805901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:10.858861  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:10.858895  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:11.556955  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.560411  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.027402  185546 pod_ready.go:103] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:14.026501  185546 pod_ready.go:93] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.026537  185546 pod_ready.go:82] duration metric: took 3.006475793s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.026552  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036355  185546 pod_ready.go:93] pod "kube-apiserver-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.036379  185546 pod_ready.go:82] duration metric: took 9.819102ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036391  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042711  185546 pod_ready.go:93] pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.042734  185546 pod_ready.go:82] duration metric: took 6.336523ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042745  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047387  185546 pod_ready.go:93] pod "kube-proxy-6rc4l" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.047409  185546 pod_ready.go:82] duration metric: took 4.657388ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047422  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208217  185546 pod_ready.go:93] pod "kube-scheduler-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.208243  185546 pod_ready.go:82] duration metric: took 160.813834ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208254  185546 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:16.214834  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.268493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:15.271377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.373936  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:13.387904  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:13.387969  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:13.435502  186170 cri.go:89] found id: ""
	I1028 12:17:13.435528  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.435536  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:13.435547  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:13.435593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:13.475592  186170 cri.go:89] found id: ""
	I1028 12:17:13.475621  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.475631  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:13.475639  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:13.475703  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:13.524964  186170 cri.go:89] found id: ""
	I1028 12:17:13.524993  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.525002  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:13.525010  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:13.525071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:13.570408  186170 cri.go:89] found id: ""
	I1028 12:17:13.570437  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.570446  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:13.570455  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:13.570515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:13.620981  186170 cri.go:89] found id: ""
	I1028 12:17:13.621008  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.621016  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:13.621022  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:13.621071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:13.657345  186170 cri.go:89] found id: ""
	I1028 12:17:13.657375  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.657385  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:13.657393  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:13.657455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:13.695975  186170 cri.go:89] found id: ""
	I1028 12:17:13.695998  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.696005  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:13.696012  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:13.696059  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:13.744055  186170 cri.go:89] found id: ""
	I1028 12:17:13.744093  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.744112  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:13.744128  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:13.744143  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:13.798898  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:13.798936  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:13.813630  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:13.813676  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:13.886699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:13.886733  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:13.886750  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:13.972377  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:13.972419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.518525  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:16.532512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:16.532594  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:16.573345  186170 cri.go:89] found id: ""
	I1028 12:17:16.573370  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.573377  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:16.573384  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:16.573449  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:16.611130  186170 cri.go:89] found id: ""
	I1028 12:17:16.611159  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.611170  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:16.611179  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:16.611242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:16.646155  186170 cri.go:89] found id: ""
	I1028 12:17:16.646180  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.646187  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:16.646194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:16.646253  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:16.680731  186170 cri.go:89] found id: ""
	I1028 12:17:16.680761  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.680770  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:16.680776  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:16.680836  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:16.725323  186170 cri.go:89] found id: ""
	I1028 12:17:16.725351  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.725361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:16.725370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:16.725429  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:16.761810  186170 cri.go:89] found id: ""
	I1028 12:17:16.761839  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.761850  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:16.761859  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:16.761919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:16.797737  186170 cri.go:89] found id: ""
	I1028 12:17:16.797771  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.797783  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:16.797791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:16.797854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:16.834045  186170 cri.go:89] found id: ""
	I1028 12:17:16.834077  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.834087  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:16.834098  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:16.834111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:16.885174  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:16.885211  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:16.900281  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:16.900312  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:16.973761  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:16.973784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:16.973799  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:17.058711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:17.058747  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.056296  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.557898  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.215767  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:20.219613  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:17.764493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.766909  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:21.769560  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.605867  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:19.620832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:19.620896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:19.660722  186170 cri.go:89] found id: ""
	I1028 12:17:19.660747  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.660757  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:19.660765  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:19.660825  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:19.698537  186170 cri.go:89] found id: ""
	I1028 12:17:19.698571  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.698581  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:19.698590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:19.698639  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:19.736911  186170 cri.go:89] found id: ""
	I1028 12:17:19.736945  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.736956  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:19.736972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:19.737041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:19.779343  186170 cri.go:89] found id: ""
	I1028 12:17:19.779371  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.779379  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:19.779384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:19.779432  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:19.824749  186170 cri.go:89] found id: ""
	I1028 12:17:19.824778  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.824788  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:19.824796  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:19.824861  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:19.862810  186170 cri.go:89] found id: ""
	I1028 12:17:19.862850  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.862862  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:19.862871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:19.862935  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:19.910552  186170 cri.go:89] found id: ""
	I1028 12:17:19.910583  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.910592  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:19.910601  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:19.910663  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:19.956806  186170 cri.go:89] found id: ""
	I1028 12:17:19.956838  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.956850  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:19.956862  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:19.956879  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:20.018142  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:20.018187  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:20.035656  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:20.035696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:20.112484  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:20.112515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:20.112535  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:20.203034  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:20.203079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:22.749198  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:22.762993  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:22.763073  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:22.808879  186170 cri.go:89] found id: ""
	I1028 12:17:22.808923  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.808934  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:22.808943  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:22.809013  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:22.845367  186170 cri.go:89] found id: ""
	I1028 12:17:22.845393  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.845401  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:22.845407  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:22.845457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:22.884841  186170 cri.go:89] found id: ""
	I1028 12:17:22.884870  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.884877  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:22.884884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:22.884936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:22.921830  186170 cri.go:89] found id: ""
	I1028 12:17:22.921857  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.921865  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:22.921871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:22.921917  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:22.958981  186170 cri.go:89] found id: ""
	I1028 12:17:22.959016  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.959028  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:22.959038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:22.959138  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:22.993987  186170 cri.go:89] found id: ""
	I1028 12:17:22.994022  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.994033  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:22.994041  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:22.994112  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:23.036235  186170 cri.go:89] found id: ""
	I1028 12:17:23.036262  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.036270  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:23.036276  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:23.036326  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:23.084209  186170 cri.go:89] found id: ""
	I1028 12:17:23.084237  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.084248  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:23.084260  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:23.084274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:23.168684  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:23.168725  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:23.211205  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:23.211246  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:23.269140  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:23.269174  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:23.283588  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:23.283620  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:17:21.057114  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:23.058470  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:25.556210  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:22.714692  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.717301  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.269572  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:26.765467  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:17:23.363349  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:25.864503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:25.881420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:25.881505  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:25.920194  186170 cri.go:89] found id: ""
	I1028 12:17:25.920230  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.920242  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:25.920250  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:25.920319  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:25.982898  186170 cri.go:89] found id: ""
	I1028 12:17:25.982940  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.982952  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:25.982960  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:25.983026  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:26.042807  186170 cri.go:89] found id: ""
	I1028 12:17:26.042848  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.042856  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:26.042863  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:26.042914  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:26.081683  186170 cri.go:89] found id: ""
	I1028 12:17:26.081717  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.081729  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:26.081738  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:26.081811  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:26.118390  186170 cri.go:89] found id: ""
	I1028 12:17:26.118419  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.118426  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:26.118433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:26.118482  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:26.154065  186170 cri.go:89] found id: ""
	I1028 12:17:26.154100  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.154108  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:26.154114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:26.154168  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:26.195602  186170 cri.go:89] found id: ""
	I1028 12:17:26.195634  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.195645  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:26.195656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:26.195711  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:26.237315  186170 cri.go:89] found id: ""
	I1028 12:17:26.237350  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.237361  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:26.237371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:26.237383  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:26.319079  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:26.319121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:26.360967  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:26.360996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:26.414689  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:26.414728  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:26.429733  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:26.429763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:26.503297  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:28.056563  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:30.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:27.215356  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.216505  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.267239  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.765267  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.003479  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:29.017833  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:29.017908  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:29.067759  186170 cri.go:89] found id: ""
	I1028 12:17:29.067785  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.067793  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:29.067799  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:29.067856  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:29.114369  186170 cri.go:89] found id: ""
	I1028 12:17:29.114401  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.114411  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:29.114419  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:29.114511  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:29.154640  186170 cri.go:89] found id: ""
	I1028 12:17:29.154672  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.154683  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:29.154692  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:29.154749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:29.194296  186170 cri.go:89] found id: ""
	I1028 12:17:29.194331  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.194341  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:29.194349  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:29.194413  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:29.239107  186170 cri.go:89] found id: ""
	I1028 12:17:29.239133  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.239146  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:29.239152  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:29.239199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:29.274900  186170 cri.go:89] found id: ""
	I1028 12:17:29.274928  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.274937  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:29.274946  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:29.275010  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:29.310307  186170 cri.go:89] found id: ""
	I1028 12:17:29.310336  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.310346  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:29.310354  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:29.310421  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:29.345285  186170 cri.go:89] found id: ""
	I1028 12:17:29.345313  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.345351  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:29.345363  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:29.345379  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:29.402044  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:29.402094  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:29.417578  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:29.417615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:29.497733  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:29.497757  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:29.497773  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:29.587148  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:29.587202  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:32.132697  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:32.146675  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:32.146746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:32.188640  186170 cri.go:89] found id: ""
	I1028 12:17:32.188669  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.188681  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:32.188690  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:32.188749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:32.228690  186170 cri.go:89] found id: ""
	I1028 12:17:32.228726  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.228738  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:32.228745  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:32.228812  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:32.269133  186170 cri.go:89] found id: ""
	I1028 12:17:32.269180  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.269191  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:32.269200  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:32.269279  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:32.319757  186170 cri.go:89] found id: ""
	I1028 12:17:32.319796  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.319809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:32.319817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:32.319888  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:32.360072  186170 cri.go:89] found id: ""
	I1028 12:17:32.360104  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.360116  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:32.360125  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:32.360192  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:32.413256  186170 cri.go:89] found id: ""
	I1028 12:17:32.413286  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.413297  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:32.413319  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:32.413371  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:32.454505  186170 cri.go:89] found id: ""
	I1028 12:17:32.454536  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.454547  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:32.454555  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:32.454621  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:32.495091  186170 cri.go:89] found id: ""
	I1028 12:17:32.495129  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.495138  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:32.495148  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:32.495163  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:32.548669  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:32.548712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:32.566003  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:32.566044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:32.642079  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:32.642104  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:32.642117  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:32.727317  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:32.727361  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:33.055776  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.056525  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.714959  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:33.715292  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.715824  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:34.267155  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:36.765199  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.278752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:35.292256  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:35.292344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:35.328420  186170 cri.go:89] found id: ""
	I1028 12:17:35.328447  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.328457  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:35.328465  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:35.328528  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:35.365120  186170 cri.go:89] found id: ""
	I1028 12:17:35.365153  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.365162  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:35.365170  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:35.365236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:35.402057  186170 cri.go:89] found id: ""
	I1028 12:17:35.402093  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.402105  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:35.402114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:35.402179  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:35.436496  186170 cri.go:89] found id: ""
	I1028 12:17:35.436523  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.436531  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:35.436536  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:35.436593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:35.473369  186170 cri.go:89] found id: ""
	I1028 12:17:35.473399  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.473409  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:35.473416  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:35.473480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:35.511258  186170 cri.go:89] found id: ""
	I1028 12:17:35.511293  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.511305  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:35.511337  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:35.511403  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:35.548430  186170 cri.go:89] found id: ""
	I1028 12:17:35.548461  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.548472  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:35.548479  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:35.548526  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:35.584324  186170 cri.go:89] found id: ""
	I1028 12:17:35.584357  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.584369  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:35.584379  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:35.584394  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:35.598813  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:35.598855  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:35.676911  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:35.676935  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:35.676948  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:35.757166  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:35.757205  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:35.801381  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:35.801411  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:37.557428  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.057039  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:37.715996  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.213916  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.765841  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:41.267477  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.356346  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:38.370346  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:38.370436  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:38.413623  186170 cri.go:89] found id: ""
	I1028 12:17:38.413653  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.413664  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:38.413671  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:38.413741  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:38.450656  186170 cri.go:89] found id: ""
	I1028 12:17:38.450682  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.450691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:38.450697  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:38.450754  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:38.491050  186170 cri.go:89] found id: ""
	I1028 12:17:38.491083  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.491090  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:38.491096  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:38.491146  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:38.529708  186170 cri.go:89] found id: ""
	I1028 12:17:38.529735  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.529743  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:38.529749  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:38.529808  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:38.566632  186170 cri.go:89] found id: ""
	I1028 12:17:38.566659  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.566673  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:38.566681  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:38.566746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:38.602323  186170 cri.go:89] found id: ""
	I1028 12:17:38.602362  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.602374  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:38.602382  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:38.602444  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:38.646462  186170 cri.go:89] found id: ""
	I1028 12:17:38.646487  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.646494  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:38.646499  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:38.646560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:38.681803  186170 cri.go:89] found id: ""
	I1028 12:17:38.681830  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.681837  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:38.681847  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:38.681858  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:38.697360  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:38.697387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:38.769502  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:38.769549  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:38.769566  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:38.852029  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:38.852068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:38.895585  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:38.895621  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.450844  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:41.464665  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:41.464731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:41.507199  186170 cri.go:89] found id: ""
	I1028 12:17:41.507265  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.507274  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:41.507280  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:41.507351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:41.550126  186170 cri.go:89] found id: ""
	I1028 12:17:41.550158  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.550168  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:41.550176  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:41.550237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:41.588914  186170 cri.go:89] found id: ""
	I1028 12:17:41.588942  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.588953  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:41.588961  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:41.589027  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:41.625255  186170 cri.go:89] found id: ""
	I1028 12:17:41.625285  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.625297  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:41.625315  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:41.625386  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:41.663786  186170 cri.go:89] found id: ""
	I1028 12:17:41.663816  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.663833  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:41.663844  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:41.663911  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:41.698330  186170 cri.go:89] found id: ""
	I1028 12:17:41.698357  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.698364  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:41.698371  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:41.698424  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:41.734658  186170 cri.go:89] found id: ""
	I1028 12:17:41.734688  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.734699  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:41.734707  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:41.734776  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:41.773227  186170 cri.go:89] found id: ""
	I1028 12:17:41.773262  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.773273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:41.773286  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:41.773301  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:41.815830  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:41.815866  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.866789  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:41.866832  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:41.882088  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:41.882121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:41.953895  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:41.953917  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:41.953933  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:42.556504  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.557351  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:42.216159  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.216286  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:43.764776  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.265654  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.538655  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:44.551644  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:44.551724  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:44.589370  186170 cri.go:89] found id: ""
	I1028 12:17:44.589400  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.589407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:44.589413  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:44.589473  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:44.625143  186170 cri.go:89] found id: ""
	I1028 12:17:44.625175  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.625185  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:44.625198  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:44.625283  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:44.664579  186170 cri.go:89] found id: ""
	I1028 12:17:44.664609  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.664620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:44.664628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:44.664692  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:44.700009  186170 cri.go:89] found id: ""
	I1028 12:17:44.700038  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.700046  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:44.700053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:44.700119  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:44.736283  186170 cri.go:89] found id: ""
	I1028 12:17:44.736316  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.736323  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:44.736331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:44.736393  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:44.772214  186170 cri.go:89] found id: ""
	I1028 12:17:44.772249  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.772261  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:44.772270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:44.772324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:44.808152  186170 cri.go:89] found id: ""
	I1028 12:17:44.808187  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.808198  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:44.808206  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:44.808276  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:44.844208  186170 cri.go:89] found id: ""
	I1028 12:17:44.844238  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.844251  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:44.844264  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:44.844286  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:44.925988  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:44.926029  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:44.964936  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:44.964969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:45.015630  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:45.015675  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:45.030537  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:45.030571  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:45.103861  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:47.604548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:47.618858  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:47.618941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:47.663237  186170 cri.go:89] found id: ""
	I1028 12:17:47.663267  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.663278  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:47.663285  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:47.663350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:47.703207  186170 cri.go:89] found id: ""
	I1028 12:17:47.703236  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.703244  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:47.703250  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:47.703322  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:47.743050  186170 cri.go:89] found id: ""
	I1028 12:17:47.743081  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.743091  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:47.743099  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:47.743161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:47.789956  186170 cri.go:89] found id: ""
	I1028 12:17:47.789982  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.789989  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:47.789996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:47.790055  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:47.833134  186170 cri.go:89] found id: ""
	I1028 12:17:47.833165  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.833177  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:47.833184  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:47.833241  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:47.870881  186170 cri.go:89] found id: ""
	I1028 12:17:47.870905  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.870916  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:47.870925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:47.870992  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:47.908121  186170 cri.go:89] found id: ""
	I1028 12:17:47.908155  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.908165  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:47.908173  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:47.908236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:47.946835  186170 cri.go:89] found id: ""
	I1028 12:17:47.946871  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.946884  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:47.946896  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:47.946914  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:47.999276  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:47.999316  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:48.016268  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:48.016306  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:48.099928  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:48.099959  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:48.099976  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:48.180885  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:48.180937  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:46.565643  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.057078  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.716667  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.216308  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:48.267160  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.764737  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.727685  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:50.741737  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:50.741820  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:50.782030  186170 cri.go:89] found id: ""
	I1028 12:17:50.782060  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.782081  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:50.782090  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:50.782157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:50.817423  186170 cri.go:89] found id: ""
	I1028 12:17:50.817453  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.817464  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:50.817471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:50.817523  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:50.857203  186170 cri.go:89] found id: ""
	I1028 12:17:50.857232  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.857242  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:50.857249  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:50.857324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:50.894196  186170 cri.go:89] found id: ""
	I1028 12:17:50.894236  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.894248  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:50.894259  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:50.894325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:50.930014  186170 cri.go:89] found id: ""
	I1028 12:17:50.930046  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.930056  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:50.930064  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:50.930128  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:50.967742  186170 cri.go:89] found id: ""
	I1028 12:17:50.967774  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.967785  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:50.967799  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:50.967857  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:51.013232  186170 cri.go:89] found id: ""
	I1028 12:17:51.013258  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.013269  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:51.013281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:51.013341  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:51.052871  186170 cri.go:89] found id: ""
	I1028 12:17:51.052900  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.052912  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:51.052923  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:51.052943  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:51.106536  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:51.106579  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:51.121628  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:51.121670  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:51.200215  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:51.200249  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:51.200266  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:51.291948  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:51.291996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:51.058399  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.556450  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:55.557043  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:51.715736  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.215689  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:52.764839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.766020  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:57.269346  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.837066  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:53.851660  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:53.851747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:53.888799  186170 cri.go:89] found id: ""
	I1028 12:17:53.888835  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.888846  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:53.888855  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:53.888919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:53.923838  186170 cri.go:89] found id: ""
	I1028 12:17:53.923867  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.923875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:53.923880  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:53.923940  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:53.960264  186170 cri.go:89] found id: ""
	I1028 12:17:53.960293  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.960302  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:53.960307  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:53.960356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:53.995913  186170 cri.go:89] found id: ""
	I1028 12:17:53.995943  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.995952  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:53.995958  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:53.996009  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:54.032127  186170 cri.go:89] found id: ""
	I1028 12:17:54.032155  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.032163  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:54.032169  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:54.032219  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:54.070230  186170 cri.go:89] found id: ""
	I1028 12:17:54.070267  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.070279  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:54.070288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:54.070346  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:54.104992  186170 cri.go:89] found id: ""
	I1028 12:17:54.105024  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.105032  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:54.105038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:54.105099  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:54.140071  186170 cri.go:89] found id: ""
	I1028 12:17:54.140102  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.140113  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:54.140124  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:54.140137  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:54.195304  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:54.195353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:54.210315  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:54.210355  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:54.301247  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:54.301279  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:54.301300  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:54.382818  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:54.382876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:56.928740  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:56.942264  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:56.942334  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:56.979445  186170 cri.go:89] found id: ""
	I1028 12:17:56.979494  186170 logs.go:282] 0 containers: []
	W1028 12:17:56.979503  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:56.979510  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:56.979580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:57.017777  186170 cri.go:89] found id: ""
	I1028 12:17:57.017817  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.017831  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:57.017840  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:57.017954  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:57.058842  186170 cri.go:89] found id: ""
	I1028 12:17:57.058873  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.058881  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:57.058887  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:57.058941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:57.096365  186170 cri.go:89] found id: ""
	I1028 12:17:57.096393  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.096401  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:57.096408  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:57.096456  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:57.135395  186170 cri.go:89] found id: ""
	I1028 12:17:57.135425  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.135433  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:57.135440  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:57.135502  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:57.173426  186170 cri.go:89] found id: ""
	I1028 12:17:57.173455  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.173466  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:57.173473  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:57.173536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:57.209969  186170 cri.go:89] found id: ""
	I1028 12:17:57.210004  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.210015  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:57.210026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:57.210118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:57.252141  186170 cri.go:89] found id: ""
	I1028 12:17:57.252172  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.252182  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:57.252192  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:57.252206  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:57.304533  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:57.304576  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:57.319775  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:57.319807  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:57.385156  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:57.385186  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:57.385198  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:57.464777  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:57.464818  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:57.557519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.057963  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:56.715168  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:58.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.215445  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:59.271418  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.766158  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.005073  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:00.033478  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:00.033580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:00.071437  186170 cri.go:89] found id: ""
	I1028 12:18:00.071462  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.071470  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:00.071475  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:00.071524  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:00.108147  186170 cri.go:89] found id: ""
	I1028 12:18:00.108183  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.108195  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:00.108204  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:00.108262  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:00.146129  186170 cri.go:89] found id: ""
	I1028 12:18:00.146157  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.146168  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:00.146176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:00.146237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:00.184211  186170 cri.go:89] found id: ""
	I1028 12:18:00.184239  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.184254  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:00.184262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:00.184325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:00.221949  186170 cri.go:89] found id: ""
	I1028 12:18:00.221980  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.221988  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:00.221995  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:00.222049  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:00.264173  186170 cri.go:89] found id: ""
	I1028 12:18:00.264203  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.264213  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:00.264230  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:00.264287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:00.302024  186170 cri.go:89] found id: ""
	I1028 12:18:00.302048  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.302057  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:00.302065  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:00.302134  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:00.340500  186170 cri.go:89] found id: ""
	I1028 12:18:00.340529  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.340542  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:00.340553  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:00.340574  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:00.392375  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:00.392419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:00.409823  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:00.409854  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:00.489965  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:00.489988  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:00.490000  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:00.574510  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:00.574553  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.116821  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:03.131120  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:03.131188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:03.168283  186170 cri.go:89] found id: ""
	I1028 12:18:03.168320  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.168331  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:03.168340  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:03.168404  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:03.210877  186170 cri.go:89] found id: ""
	I1028 12:18:03.210902  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.210910  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:03.210922  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:03.210981  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:03.248316  186170 cri.go:89] found id: ""
	I1028 12:18:03.248351  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.248362  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:03.248370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:03.248437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:03.287624  186170 cri.go:89] found id: ""
	I1028 12:18:03.287653  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.287663  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:03.287674  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:03.287738  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:02.556743  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.055348  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.217504  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.715462  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.768899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:06.266111  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.323235  186170 cri.go:89] found id: ""
	I1028 12:18:03.323268  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.323281  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:03.323289  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:03.323350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:03.359449  186170 cri.go:89] found id: ""
	I1028 12:18:03.359481  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.359489  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:03.359496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:03.359544  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:03.397656  186170 cri.go:89] found id: ""
	I1028 12:18:03.397682  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.397690  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:03.397696  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:03.397756  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:03.436269  186170 cri.go:89] found id: ""
	I1028 12:18:03.436312  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.436325  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:03.436337  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:03.436353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.484677  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:03.484721  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:03.538826  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:03.538867  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:03.554032  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:03.554067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:03.630222  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:03.630256  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:03.630274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.208709  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:06.223650  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:06.223731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:06.264302  186170 cri.go:89] found id: ""
	I1028 12:18:06.264339  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.264348  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:06.264356  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:06.264415  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:06.306168  186170 cri.go:89] found id: ""
	I1028 12:18:06.306204  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.306212  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:06.306218  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:06.306306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:06.344883  186170 cri.go:89] found id: ""
	I1028 12:18:06.344909  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.344920  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:06.344927  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:06.344978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:06.382601  186170 cri.go:89] found id: ""
	I1028 12:18:06.382630  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.382640  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:06.382648  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:06.382720  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:06.428844  186170 cri.go:89] found id: ""
	I1028 12:18:06.428871  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.428878  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:06.428884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:06.428936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:06.480468  186170 cri.go:89] found id: ""
	I1028 12:18:06.480497  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.480508  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:06.480516  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:06.480581  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:06.525838  186170 cri.go:89] found id: ""
	I1028 12:18:06.525869  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.525882  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:06.525890  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:06.525950  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:06.572122  186170 cri.go:89] found id: ""
	I1028 12:18:06.572147  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.572154  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:06.572164  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:06.572176  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:06.642898  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:06.642925  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:06.642941  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.727353  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:06.727399  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:06.770170  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:06.770208  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:06.825593  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:06.825635  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:07.055842  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.057870  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:07.716593  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.215089  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:08.266990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.765441  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.340955  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:09.355706  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:09.355783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:09.390008  186170 cri.go:89] found id: ""
	I1028 12:18:09.390039  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.390050  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:09.390057  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:09.390123  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:09.428209  186170 cri.go:89] found id: ""
	I1028 12:18:09.428247  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.428259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:09.428267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:09.428327  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:09.466499  186170 cri.go:89] found id: ""
	I1028 12:18:09.466524  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.466531  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:09.466538  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:09.466596  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:09.505384  186170 cri.go:89] found id: ""
	I1028 12:18:09.505418  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.505426  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:09.505433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:09.505492  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:09.543113  186170 cri.go:89] found id: ""
	I1028 12:18:09.543145  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.543154  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:09.543160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:09.543225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:09.581402  186170 cri.go:89] found id: ""
	I1028 12:18:09.581436  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.581446  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:09.581459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:09.581542  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:09.620586  186170 cri.go:89] found id: ""
	I1028 12:18:09.620616  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.620623  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:09.620629  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:09.620682  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:09.657220  186170 cri.go:89] found id: ""
	I1028 12:18:09.657246  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.657253  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:09.657261  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:09.657272  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:09.709636  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:09.709671  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:09.724476  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:09.724510  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:09.800194  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:09.800226  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:09.800242  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:09.882217  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:09.882254  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:12.425609  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:12.443417  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:12.443480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:12.509173  186170 cri.go:89] found id: ""
	I1028 12:18:12.509202  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.509211  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:12.509217  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:12.509287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:12.546564  186170 cri.go:89] found id: ""
	I1028 12:18:12.546595  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.546605  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:12.546612  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:12.546676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:12.584949  186170 cri.go:89] found id: ""
	I1028 12:18:12.584982  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.584990  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:12.584996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:12.585045  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:12.624513  186170 cri.go:89] found id: ""
	I1028 12:18:12.624543  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.624554  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:12.624562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:12.624624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:12.661811  186170 cri.go:89] found id: ""
	I1028 12:18:12.661854  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.661867  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:12.661876  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:12.661936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:12.700037  186170 cri.go:89] found id: ""
	I1028 12:18:12.700072  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.700080  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:12.700086  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:12.700149  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:12.740604  186170 cri.go:89] found id: ""
	I1028 12:18:12.740629  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.740637  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:12.740643  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:12.740696  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:12.779296  186170 cri.go:89] found id: ""
	I1028 12:18:12.779323  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.779333  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:12.779344  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:12.779358  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:12.830286  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:12.830330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:12.845423  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:12.845449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:12.923961  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:12.924003  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:12.924018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:13.003949  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:13.003990  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:11.556422  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.056678  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.216340  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.715086  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.766493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.766870  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.264729  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:15.552001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:15.565834  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:15.565899  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:15.598794  186170 cri.go:89] found id: ""
	I1028 12:18:15.598819  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.598828  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:15.598836  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:15.598904  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:15.637029  186170 cri.go:89] found id: ""
	I1028 12:18:15.637062  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.637073  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:15.637082  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:15.637148  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:15.675461  186170 cri.go:89] found id: ""
	I1028 12:18:15.675495  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.675503  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:15.675510  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:15.675577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:15.709169  186170 cri.go:89] found id: ""
	I1028 12:18:15.709198  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.709210  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:15.709217  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:15.709288  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:15.747687  186170 cri.go:89] found id: ""
	I1028 12:18:15.747715  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.747725  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:15.747740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:15.747802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:15.785554  186170 cri.go:89] found id: ""
	I1028 12:18:15.785587  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.785598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:15.785607  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:15.785674  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:15.828713  186170 cri.go:89] found id: ""
	I1028 12:18:15.828749  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.828762  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:15.828771  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:15.828834  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:15.864708  186170 cri.go:89] found id: ""
	I1028 12:18:15.864745  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.864757  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:15.864767  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:15.864788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:15.941064  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:15.941090  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:15.941102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:16.031546  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:16.031586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:16.074297  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:16.074343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:16.132758  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:16.132803  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:16.057216  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.555816  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:20.556292  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.215803  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.215927  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.265178  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.268144  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.649877  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:18.663420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:18.663480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:18.698967  186170 cri.go:89] found id: ""
	I1028 12:18:18.698999  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.699011  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:18.699020  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:18.699088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:18.738095  186170 cri.go:89] found id: ""
	I1028 12:18:18.738128  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.738140  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:18.738149  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:18.738231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:18.780039  186170 cri.go:89] found id: ""
	I1028 12:18:18.780066  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.780074  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:18.780080  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:18.780131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:18.820458  186170 cri.go:89] found id: ""
	I1028 12:18:18.820492  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.820501  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:18.820512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:18.820569  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:18.860856  186170 cri.go:89] found id: ""
	I1028 12:18:18.860887  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.860896  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:18.860903  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:18.860965  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:18.900435  186170 cri.go:89] found id: ""
	I1028 12:18:18.900467  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.900478  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:18.900486  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:18.900547  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:18.938468  186170 cri.go:89] found id: ""
	I1028 12:18:18.938499  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.938508  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:18.938515  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:18.938570  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:18.975389  186170 cri.go:89] found id: ""
	I1028 12:18:18.975429  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.975440  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:18.975451  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:18.975466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:19.028306  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:19.028354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:19.043348  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:19.043382  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:19.117653  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:19.117721  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:19.117737  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:19.204218  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:19.204256  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:21.749564  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:21.768060  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:21.768131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:21.805414  186170 cri.go:89] found id: ""
	I1028 12:18:21.805443  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.805454  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:21.805462  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:21.805541  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:21.842649  186170 cri.go:89] found id: ""
	I1028 12:18:21.842681  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.842691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:21.842699  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:21.842767  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:21.883241  186170 cri.go:89] found id: ""
	I1028 12:18:21.883269  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.883279  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:21.883288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:21.883351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:21.926358  186170 cri.go:89] found id: ""
	I1028 12:18:21.926386  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.926394  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:21.926401  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:21.926453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:21.964671  186170 cri.go:89] found id: ""
	I1028 12:18:21.964705  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.964717  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:21.964726  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:21.964794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:22.019111  186170 cri.go:89] found id: ""
	I1028 12:18:22.019144  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.019154  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:22.019163  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:22.019223  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:22.057484  186170 cri.go:89] found id: ""
	I1028 12:18:22.057511  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.057518  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:22.057547  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:22.057606  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:22.096908  186170 cri.go:89] found id: ""
	I1028 12:18:22.096931  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.096938  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:22.096947  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:22.096962  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:22.180348  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:22.180386  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:22.224772  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:22.224808  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:22.277686  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:22.277726  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:22.293300  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:22.293330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:22.369990  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:22.556987  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.057115  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.715576  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.715814  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.716043  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.767435  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:26.269805  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:24.870290  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:24.887030  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:24.887090  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:24.927592  186170 cri.go:89] found id: ""
	I1028 12:18:24.927620  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.927628  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:24.927635  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:24.927700  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:24.969025  186170 cri.go:89] found id: ""
	I1028 12:18:24.969059  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.969070  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:24.969077  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:24.969142  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:25.005439  186170 cri.go:89] found id: ""
	I1028 12:18:25.005476  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.005488  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:25.005496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:25.005573  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:25.046612  186170 cri.go:89] found id: ""
	I1028 12:18:25.046650  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.046659  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:25.046669  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:25.046733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:25.083162  186170 cri.go:89] found id: ""
	I1028 12:18:25.083186  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.083200  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:25.083209  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:25.083270  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:25.119277  186170 cri.go:89] found id: ""
	I1028 12:18:25.119322  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.119333  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:25.119341  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:25.119409  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:25.160875  186170 cri.go:89] found id: ""
	I1028 12:18:25.160906  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.160917  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:25.160925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:25.160987  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:25.194958  186170 cri.go:89] found id: ""
	I1028 12:18:25.194993  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.195003  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:25.195016  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:25.195032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:25.248571  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:25.248612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:25.264844  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:25.264876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:25.341487  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:25.341517  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:25.341552  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:25.419543  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:25.419586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:27.963358  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:27.977449  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:27.977509  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:28.013922  186170 cri.go:89] found id: ""
	I1028 12:18:28.013955  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.013963  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:28.013969  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:28.014050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:28.054628  186170 cri.go:89] found id: ""
	I1028 12:18:28.054658  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.054666  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:28.054671  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:28.054719  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:28.094289  186170 cri.go:89] found id: ""
	I1028 12:18:28.094315  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.094323  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:28.094330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:28.094390  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:28.131949  186170 cri.go:89] found id: ""
	I1028 12:18:28.131998  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.132011  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:28.132019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:28.132082  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:28.170428  186170 cri.go:89] found id: ""
	I1028 12:18:28.170461  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.170474  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:28.170483  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:28.170550  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:28.204953  186170 cri.go:89] found id: ""
	I1028 12:18:28.204980  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.204987  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:28.204994  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:28.205041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:28.247002  186170 cri.go:89] found id: ""
	I1028 12:18:28.247035  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.247044  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:28.247052  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:28.247122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:28.286700  186170 cri.go:89] found id: ""
	I1028 12:18:28.286730  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.286739  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:28.286747  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:28.286762  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:27.556197  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.057036  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.216535  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.715902  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.765730  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:31.267947  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.339162  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:28.339201  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:28.353667  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:28.353696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:28.426762  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:28.426784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:28.426800  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:28.511192  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:28.511232  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:31.054503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:31.069105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:31.069195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:31.112198  186170 cri.go:89] found id: ""
	I1028 12:18:31.112228  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.112237  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:31.112243  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:31.112306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:31.151487  186170 cri.go:89] found id: ""
	I1028 12:18:31.151522  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.151535  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:31.151544  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:31.151605  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:31.189604  186170 cri.go:89] found id: ""
	I1028 12:18:31.189636  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.189645  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:31.189651  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:31.189712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:31.231683  186170 cri.go:89] found id: ""
	I1028 12:18:31.231716  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.231726  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:31.231735  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:31.231793  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:31.268785  186170 cri.go:89] found id: ""
	I1028 12:18:31.268813  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.268824  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:31.268832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:31.268901  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:31.307450  186170 cri.go:89] found id: ""
	I1028 12:18:31.307475  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.307483  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:31.307489  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:31.307539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:31.342965  186170 cri.go:89] found id: ""
	I1028 12:18:31.342999  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.343011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:31.343019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:31.343084  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:31.380275  186170 cri.go:89] found id: ""
	I1028 12:18:31.380307  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.380317  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:31.380329  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:31.380343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:31.430198  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:31.430249  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:31.446355  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:31.446387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:31.530708  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:31.530738  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:31.530754  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:31.614033  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:31.614079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:32.556500  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.557446  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.214627  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:35.214782  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.772856  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:36.265722  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.156345  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:34.169766  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:34.169829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:34.208855  186170 cri.go:89] found id: ""
	I1028 12:18:34.208888  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.208903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:34.208910  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:34.208967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:34.258485  186170 cri.go:89] found id: ""
	I1028 12:18:34.258515  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.258524  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:34.258531  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:34.258593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:34.294139  186170 cri.go:89] found id: ""
	I1028 12:18:34.294168  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.294176  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:34.294182  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:34.294242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:34.329848  186170 cri.go:89] found id: ""
	I1028 12:18:34.329881  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.329892  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:34.329900  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:34.329967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:34.368223  186170 cri.go:89] found id: ""
	I1028 12:18:34.368249  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.368256  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:34.368262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:34.368310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:34.405101  186170 cri.go:89] found id: ""
	I1028 12:18:34.405133  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.405142  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:34.405149  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:34.405207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:34.441998  186170 cri.go:89] found id: ""
	I1028 12:18:34.442034  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.442045  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:34.442053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:34.442118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:34.478842  186170 cri.go:89] found id: ""
	I1028 12:18:34.478877  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.478888  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:34.478901  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:34.478917  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:34.532950  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:34.532991  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:34.548614  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:34.548643  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:34.623699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:34.623726  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:34.623743  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:34.702104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:34.702142  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.259720  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:37.276526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:37.276592  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:37.325783  186170 cri.go:89] found id: ""
	I1028 12:18:37.325823  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.325838  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:37.325847  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:37.325916  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:37.362754  186170 cri.go:89] found id: ""
	I1028 12:18:37.362784  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.362805  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:37.362813  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:37.362891  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:37.400428  186170 cri.go:89] found id: ""
	I1028 12:18:37.400465  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.400477  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:37.400485  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:37.400548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:37.438792  186170 cri.go:89] found id: ""
	I1028 12:18:37.438834  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.438846  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:37.438855  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:37.438918  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:37.477032  186170 cri.go:89] found id: ""
	I1028 12:18:37.477115  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.477126  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:37.477132  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:37.477199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:37.514834  186170 cri.go:89] found id: ""
	I1028 12:18:37.514866  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.514878  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:37.514888  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:37.514975  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:37.560797  186170 cri.go:89] found id: ""
	I1028 12:18:37.560821  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.560828  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:37.560835  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:37.560889  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:37.611126  186170 cri.go:89] found id: ""
	I1028 12:18:37.611156  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.611165  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:37.611177  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:37.611200  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.654809  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:37.654849  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:37.713519  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:37.713572  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:37.728043  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:37.728081  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:37.806662  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:37.806684  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:37.806702  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:36.559507  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.056993  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:37.215498  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.715541  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:38.266461  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.266611  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:42.268638  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.388380  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:40.402330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:40.402405  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:40.444948  186170 cri.go:89] found id: ""
	I1028 12:18:40.444978  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.444990  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:40.445002  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:40.445062  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:40.482342  186170 cri.go:89] found id: ""
	I1028 12:18:40.482378  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.482387  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:40.482393  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:40.482457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:40.532277  186170 cri.go:89] found id: ""
	I1028 12:18:40.532307  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.532318  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:40.532326  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:40.532388  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:40.579092  186170 cri.go:89] found id: ""
	I1028 12:18:40.579122  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.579130  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:40.579136  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:40.579204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:40.617091  186170 cri.go:89] found id: ""
	I1028 12:18:40.617116  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.617124  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:40.617130  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:40.617188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:40.655830  186170 cri.go:89] found id: ""
	I1028 12:18:40.655861  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.655871  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:40.655879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:40.655949  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:40.693436  186170 cri.go:89] found id: ""
	I1028 12:18:40.693472  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.693480  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:40.693490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:40.693572  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:40.731576  186170 cri.go:89] found id: ""
	I1028 12:18:40.731604  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.731615  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:40.731626  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:40.731642  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:40.782395  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:40.782441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:40.797572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:40.797607  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:40.873037  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:40.873078  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:40.873095  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:40.950913  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:40.950954  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:41.555847  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.558407  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:41.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.716370  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:46.214690  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:44.765752  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:47.266258  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.493377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:43.508379  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:43.508453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:43.546621  186170 cri.go:89] found id: ""
	I1028 12:18:43.546652  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.546660  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:43.546667  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:43.546714  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:43.587430  186170 cri.go:89] found id: ""
	I1028 12:18:43.587455  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.587462  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:43.587468  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:43.587520  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:43.623597  186170 cri.go:89] found id: ""
	I1028 12:18:43.623625  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.623633  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:43.623640  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:43.623702  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:43.661235  186170 cri.go:89] found id: ""
	I1028 12:18:43.661266  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.661274  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:43.661281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:43.661344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:43.697400  186170 cri.go:89] found id: ""
	I1028 12:18:43.697437  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.697448  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:43.697457  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:43.697521  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:43.732995  186170 cri.go:89] found id: ""
	I1028 12:18:43.733028  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.733038  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:43.733047  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:43.733115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:43.772570  186170 cri.go:89] found id: ""
	I1028 12:18:43.772595  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.772602  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:43.772608  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:43.772669  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:43.814234  186170 cri.go:89] found id: ""
	I1028 12:18:43.814265  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.814273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:43.814283  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:43.814295  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:43.868582  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:43.868630  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:43.885098  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:43.885136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:43.967902  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:43.967937  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:43.967955  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:44.048973  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:44.049021  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.592668  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:46.608596  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:46.608664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:46.652750  186170 cri.go:89] found id: ""
	I1028 12:18:46.652777  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.652785  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:46.652790  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:46.652848  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:46.696309  186170 cri.go:89] found id: ""
	I1028 12:18:46.696333  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.696340  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:46.696346  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:46.696396  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:46.741580  186170 cri.go:89] found id: ""
	I1028 12:18:46.741609  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.741620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:46.741628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:46.741693  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:46.782589  186170 cri.go:89] found id: ""
	I1028 12:18:46.782620  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.782628  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:46.782635  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:46.782695  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:46.821602  186170 cri.go:89] found id: ""
	I1028 12:18:46.821632  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.821644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:46.821653  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:46.821713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:46.857025  186170 cri.go:89] found id: ""
	I1028 12:18:46.857050  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.857060  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:46.857067  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:46.857115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:46.893687  186170 cri.go:89] found id: ""
	I1028 12:18:46.893725  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.893737  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:46.893746  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:46.893818  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:46.930334  186170 cri.go:89] found id: ""
	I1028 12:18:46.930367  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.930377  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:46.930385  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:46.930398  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:46.980610  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:46.980650  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:46.995861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:46.995901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:47.069355  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:47.069383  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:47.069396  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:47.157228  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:47.157284  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.056747  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.058377  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.557006  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.715456  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.716120  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.267222  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:51.765814  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.722229  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:49.735404  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:49.735507  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:49.776722  186170 cri.go:89] found id: ""
	I1028 12:18:49.776757  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.776768  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:49.776776  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:49.776844  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:49.812856  186170 cri.go:89] found id: ""
	I1028 12:18:49.812888  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.812898  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:49.812905  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:49.812989  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:49.849483  186170 cri.go:89] found id: ""
	I1028 12:18:49.849516  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.849544  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:49.849603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:49.849672  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:49.886525  186170 cri.go:89] found id: ""
	I1028 12:18:49.886555  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.886566  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:49.886574  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:49.886637  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:49.928249  186170 cri.go:89] found id: ""
	I1028 12:18:49.928281  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.928292  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:49.928299  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:49.928354  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:49.964587  186170 cri.go:89] found id: ""
	I1028 12:18:49.964619  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.964630  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:49.964641  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:49.964704  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:50.002275  186170 cri.go:89] found id: ""
	I1028 12:18:50.002305  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.002314  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:50.002321  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:50.002376  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:50.040949  186170 cri.go:89] found id: ""
	I1028 12:18:50.040979  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.040990  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:50.041003  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:50.041018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:50.086062  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:50.086098  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:50.138786  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:50.138837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:50.152992  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:50.153023  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:50.230432  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:50.230465  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:50.230481  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:52.813001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:52.825800  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:52.825879  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:52.863852  186170 cri.go:89] found id: ""
	I1028 12:18:52.863882  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.863893  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:52.863901  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:52.863967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:52.902963  186170 cri.go:89] found id: ""
	I1028 12:18:52.903003  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.903016  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:52.903024  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:52.903098  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:52.950862  186170 cri.go:89] found id: ""
	I1028 12:18:52.950893  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.950903  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:52.950912  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:52.950980  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:52.995840  186170 cri.go:89] found id: ""
	I1028 12:18:52.995872  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.995883  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:52.995891  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:52.995960  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:53.040153  186170 cri.go:89] found id: ""
	I1028 12:18:53.040179  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.040187  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:53.040194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:53.040256  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:53.077492  186170 cri.go:89] found id: ""
	I1028 12:18:53.077548  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.077561  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:53.077568  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:53.077618  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:53.114930  186170 cri.go:89] found id: ""
	I1028 12:18:53.114962  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.114973  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:53.114981  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:53.115064  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:53.152707  186170 cri.go:89] found id: ""
	I1028 12:18:53.152737  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.152747  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:53.152760  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:53.152777  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:53.195033  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:53.195068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:53.246464  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:53.246500  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:53.261430  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:53.261456  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:18:52.557045  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.057031  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:53.215817  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.714784  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:54.268377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:56.764471  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:18:53.343518  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:53.343541  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:53.343556  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:55.924584  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:55.938627  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:55.938712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:55.976319  186170 cri.go:89] found id: ""
	I1028 12:18:55.976354  186170 logs.go:282] 0 containers: []
	W1028 12:18:55.976364  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:55.976372  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:55.976440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:56.013947  186170 cri.go:89] found id: ""
	I1028 12:18:56.013979  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.014002  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:56.014010  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:56.014065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:56.055934  186170 cri.go:89] found id: ""
	I1028 12:18:56.055963  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.055970  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:56.055976  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:56.056030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:56.092766  186170 cri.go:89] found id: ""
	I1028 12:18:56.092798  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.092809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:56.092817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:56.092883  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:56.129708  186170 cri.go:89] found id: ""
	I1028 12:18:56.129741  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.129748  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:56.129755  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:56.129817  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:56.169640  186170 cri.go:89] found id: ""
	I1028 12:18:56.169684  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.169693  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:56.169700  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:56.169761  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:56.210585  186170 cri.go:89] found id: ""
	I1028 12:18:56.210617  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.210626  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:56.210633  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:56.210683  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:56.248144  186170 cri.go:89] found id: ""
	I1028 12:18:56.248177  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.248189  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:56.248201  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:56.248216  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:56.298962  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:56.299004  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:56.313314  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:56.313351  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:56.389450  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:56.389473  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:56.389508  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:56.470888  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:56.470927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:57.556098  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.057165  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:57.716269  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.214149  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:58.765585  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:01.265119  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:59.012377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:59.025740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:59.025853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:59.063706  186170 cri.go:89] found id: ""
	I1028 12:18:59.063770  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.063782  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:59.063794  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:59.063855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:59.100543  186170 cri.go:89] found id: ""
	I1028 12:18:59.100573  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.100582  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:59.100590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:59.100651  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:59.140044  186170 cri.go:89] found id: ""
	I1028 12:18:59.140073  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.140080  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:59.140087  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:59.140133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:59.174872  186170 cri.go:89] found id: ""
	I1028 12:18:59.174905  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.174914  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:59.174920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:59.174971  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:59.210456  186170 cri.go:89] found id: ""
	I1028 12:18:59.210484  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.210492  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:59.210498  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:59.210560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:59.248441  186170 cri.go:89] found id: ""
	I1028 12:18:59.248474  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.248485  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:59.248494  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:59.248558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:59.286897  186170 cri.go:89] found id: ""
	I1028 12:18:59.286928  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.286937  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:59.286944  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:59.286996  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:59.323187  186170 cri.go:89] found id: ""
	I1028 12:18:59.323221  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.323232  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:59.323244  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:59.323260  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:59.401126  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:59.401156  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:59.401171  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:59.486673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:59.486712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:59.532117  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:59.532153  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:59.588697  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:59.588738  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.104377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:02.118007  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:02.118092  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:02.157674  186170 cri.go:89] found id: ""
	I1028 12:19:02.157705  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.157715  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:02.157724  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:02.157783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:02.194407  186170 cri.go:89] found id: ""
	I1028 12:19:02.194437  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.194448  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:02.194456  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:02.194546  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:02.232940  186170 cri.go:89] found id: ""
	I1028 12:19:02.232975  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.232988  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:02.232996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:02.233070  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:02.271554  186170 cri.go:89] found id: ""
	I1028 12:19:02.271595  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.271606  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:02.271613  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:02.271681  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:02.309932  186170 cri.go:89] found id: ""
	I1028 12:19:02.309965  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.309975  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:02.309984  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:02.310044  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:02.345704  186170 cri.go:89] found id: ""
	I1028 12:19:02.345732  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.345740  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:02.345747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:02.345794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:02.381727  186170 cri.go:89] found id: ""
	I1028 12:19:02.381760  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.381770  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:02.381778  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:02.381841  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:02.417888  186170 cri.go:89] found id: ""
	I1028 12:19:02.417922  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.417933  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:02.417943  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:02.417961  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:02.497427  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:02.497458  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:02.497471  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:02.580562  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:02.580600  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:02.619048  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:02.619087  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:02.677089  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:02.677136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.556763  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.557107  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:02.216779  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.714940  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:03.267189  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.268332  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.192892  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:05.207240  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:05.207325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:05.244005  186170 cri.go:89] found id: ""
	I1028 12:19:05.244041  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.244070  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:05.244078  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:05.244130  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:05.285828  186170 cri.go:89] found id: ""
	I1028 12:19:05.285859  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.285869  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:05.285877  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:05.285936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:05.324666  186170 cri.go:89] found id: ""
	I1028 12:19:05.324694  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.324706  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:05.324713  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:05.324782  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:05.361365  186170 cri.go:89] found id: ""
	I1028 12:19:05.361401  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.361414  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:05.361423  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:05.361485  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:05.399962  186170 cri.go:89] found id: ""
	I1028 12:19:05.399996  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.400007  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:05.400017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:05.400116  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:05.438510  186170 cri.go:89] found id: ""
	I1028 12:19:05.438541  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.438553  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:05.438562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:05.438624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:05.477168  186170 cri.go:89] found id: ""
	I1028 12:19:05.477204  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.477214  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:05.477222  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:05.477286  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:05.513314  186170 cri.go:89] found id: ""
	I1028 12:19:05.513350  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.513362  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:05.513374  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:05.513388  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:05.568453  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:05.568490  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:05.583833  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:05.583870  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:05.659413  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:05.659438  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:05.659457  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:05.744673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:05.744714  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.291543  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:08.305747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:08.305829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:07.056718  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:09.056994  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:06.715788  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.716850  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:11.215701  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:07.765389  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:10.268458  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.350508  186170 cri.go:89] found id: ""
	I1028 12:19:08.350536  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.350544  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:08.350550  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:08.350602  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:08.387432  186170 cri.go:89] found id: ""
	I1028 12:19:08.387463  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.387470  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:08.387476  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:08.387527  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:08.426351  186170 cri.go:89] found id: ""
	I1028 12:19:08.426392  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.426404  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:08.426412  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:08.426478  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:08.467546  186170 cri.go:89] found id: ""
	I1028 12:19:08.467577  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.467586  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:08.467592  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:08.467642  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:08.504317  186170 cri.go:89] found id: ""
	I1028 12:19:08.504347  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.504356  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:08.504363  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:08.504418  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:08.539598  186170 cri.go:89] found id: ""
	I1028 12:19:08.539630  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.539642  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:08.539655  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:08.539713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:08.578128  186170 cri.go:89] found id: ""
	I1028 12:19:08.578162  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.578173  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:08.578181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:08.578247  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:08.614276  186170 cri.go:89] found id: ""
	I1028 12:19:08.614309  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.614326  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:08.614338  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:08.614354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:08.691937  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:08.691961  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:08.691977  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:08.773046  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:08.773092  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.816419  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:08.816449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:08.868763  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:08.868811  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.384115  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:11.398325  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:11.398416  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:11.433049  186170 cri.go:89] found id: ""
	I1028 12:19:11.433081  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.433089  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:11.433097  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:11.433151  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:11.469221  186170 cri.go:89] found id: ""
	I1028 12:19:11.469249  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.469259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:11.469267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:11.469332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:11.506673  186170 cri.go:89] found id: ""
	I1028 12:19:11.506703  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.506714  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:11.506722  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:11.506802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:11.542657  186170 cri.go:89] found id: ""
	I1028 12:19:11.542684  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.542694  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:11.542702  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:11.542760  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:11.582873  186170 cri.go:89] found id: ""
	I1028 12:19:11.582903  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.582913  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:11.582921  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:11.582990  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:11.619742  186170 cri.go:89] found id: ""
	I1028 12:19:11.619770  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.619784  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:11.619791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:11.619854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:11.654169  186170 cri.go:89] found id: ""
	I1028 12:19:11.654200  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.654211  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:11.654220  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:11.654280  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:11.690586  186170 cri.go:89] found id: ""
	I1028 12:19:11.690614  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.690624  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:11.690637  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:11.690656  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:11.744337  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:11.744378  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.758405  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:11.758446  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:11.843252  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:11.843278  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:11.843289  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:11.924104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:11.924140  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:11.559182  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.057546  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:13.216963  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:15.715550  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:12.764850  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.766597  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.265687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.464177  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:14.478351  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:14.478423  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:14.518159  186170 cri.go:89] found id: ""
	I1028 12:19:14.518189  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.518200  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:14.518209  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:14.518260  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:14.565688  186170 cri.go:89] found id: ""
	I1028 12:19:14.565722  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.565734  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:14.565742  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:14.565802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:14.601994  186170 cri.go:89] found id: ""
	I1028 12:19:14.602021  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.602029  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:14.602054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:14.602122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:14.640100  186170 cri.go:89] found id: ""
	I1028 12:19:14.640142  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.640156  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:14.640166  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:14.640237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:14.675395  186170 cri.go:89] found id: ""
	I1028 12:19:14.675422  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.675430  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:14.675436  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:14.675494  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:14.715365  186170 cri.go:89] found id: ""
	I1028 12:19:14.715393  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.715404  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:14.715413  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:14.715466  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:14.761335  186170 cri.go:89] found id: ""
	I1028 12:19:14.761363  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.761373  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:14.761381  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:14.761446  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:14.800412  186170 cri.go:89] found id: ""
	I1028 12:19:14.800449  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.800461  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:14.800472  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:14.800486  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:14.882189  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:14.882227  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:14.926725  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:14.926752  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:14.979280  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:14.979329  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:14.993985  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:14.994019  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:15.063407  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.564258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:17.578611  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:17.578679  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:17.615753  186170 cri.go:89] found id: ""
	I1028 12:19:17.615784  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.615797  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:17.615805  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:17.615864  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:17.650812  186170 cri.go:89] found id: ""
	I1028 12:19:17.650851  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.650862  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:17.650870  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:17.651014  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:17.693006  186170 cri.go:89] found id: ""
	I1028 12:19:17.693039  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.693048  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:17.693054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:17.693104  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:17.733120  186170 cri.go:89] found id: ""
	I1028 12:19:17.733146  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.733153  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:17.733160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:17.733212  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:17.773002  186170 cri.go:89] found id: ""
	I1028 12:19:17.773029  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.773036  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:17.773042  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:17.773097  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:17.812560  186170 cri.go:89] found id: ""
	I1028 12:19:17.812590  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.812597  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:17.812603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:17.812653  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:17.848307  186170 cri.go:89] found id: ""
	I1028 12:19:17.848341  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.848349  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:17.848355  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:17.848402  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:17.888184  186170 cri.go:89] found id: ""
	I1028 12:19:17.888210  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.888217  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:17.888226  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:17.888238  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:17.901662  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:17.901692  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:17.975611  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.975634  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:17.975647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:18.054762  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:18.054801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:18.101269  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:18.101302  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:16.057835  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:18.556414  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.716374  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.216629  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:19.266849  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:21.267040  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.655292  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:20.671085  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:20.671161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:20.715368  186170 cri.go:89] found id: ""
	I1028 12:19:20.715397  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.715407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:20.715415  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:20.715476  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:20.762337  186170 cri.go:89] found id: ""
	I1028 12:19:20.762366  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.762374  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:20.762379  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:20.762437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:20.804710  186170 cri.go:89] found id: ""
	I1028 12:19:20.804740  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.804747  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:20.804759  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:20.804813  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:20.841158  186170 cri.go:89] found id: ""
	I1028 12:19:20.841189  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.841199  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:20.841208  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:20.841277  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:20.883976  186170 cri.go:89] found id: ""
	I1028 12:19:20.884016  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.884027  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:20.884035  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:20.884105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:20.930155  186170 cri.go:89] found id: ""
	I1028 12:19:20.930186  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.930194  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:20.930201  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:20.930265  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:20.967805  186170 cri.go:89] found id: ""
	I1028 12:19:20.967832  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.967840  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:20.967847  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:20.967896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:21.020010  186170 cri.go:89] found id: ""
	I1028 12:19:21.020038  186170 logs.go:282] 0 containers: []
	W1028 12:19:21.020046  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:21.020055  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:21.020079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:21.081013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:21.081054  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:21.096709  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:21.096741  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:21.172935  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:21.172957  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:21.172970  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:21.248909  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:21.248949  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:21.056990  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.057233  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:25.555717  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:22.715323  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:24.715818  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.765935  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:26.264839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.793748  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:23.809036  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:23.809107  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:23.848021  186170 cri.go:89] found id: ""
	I1028 12:19:23.848051  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.848064  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:23.848070  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:23.848122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:23.885253  186170 cri.go:89] found id: ""
	I1028 12:19:23.885278  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.885294  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:23.885302  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:23.885360  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:23.923423  186170 cri.go:89] found id: ""
	I1028 12:19:23.923475  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.923484  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:23.923490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:23.923554  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:23.963761  186170 cri.go:89] found id: ""
	I1028 12:19:23.963793  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.963809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:23.963820  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:23.963890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:24.001402  186170 cri.go:89] found id: ""
	I1028 12:19:24.001431  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.001440  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:24.001447  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:24.001512  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:24.042367  186170 cri.go:89] found id: ""
	I1028 12:19:24.042400  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.042410  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:24.042419  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:24.042480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:24.081838  186170 cri.go:89] found id: ""
	I1028 12:19:24.081865  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.081873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:24.081879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:24.081932  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:24.117066  186170 cri.go:89] found id: ""
	I1028 12:19:24.117096  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.117104  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:24.117113  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:24.117125  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:24.156892  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:24.156928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:24.210595  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:24.210631  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:24.226214  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:24.226248  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:24.304750  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:24.304775  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:24.304792  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:26.887059  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:26.901656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:26.901735  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:26.944377  186170 cri.go:89] found id: ""
	I1028 12:19:26.944407  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.944416  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:26.944425  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:26.944487  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:26.980794  186170 cri.go:89] found id: ""
	I1028 12:19:26.980827  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.980835  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:26.980841  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:26.980907  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:27.023661  186170 cri.go:89] found id: ""
	I1028 12:19:27.023686  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.023694  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:27.023701  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:27.023753  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:27.062325  186170 cri.go:89] found id: ""
	I1028 12:19:27.062353  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.062361  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:27.062369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:27.062417  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:27.101200  186170 cri.go:89] found id: ""
	I1028 12:19:27.101230  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.101237  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:27.101243  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:27.101300  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:27.139566  186170 cri.go:89] found id: ""
	I1028 12:19:27.139591  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.139598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:27.139605  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:27.139664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:27.183931  186170 cri.go:89] found id: ""
	I1028 12:19:27.183959  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.183968  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:27.183996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:27.184065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:27.226978  186170 cri.go:89] found id: ""
	I1028 12:19:27.227012  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.227027  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:27.227038  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:27.227067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:27.279752  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:27.279790  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:27.293477  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:27.293504  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:27.365813  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:27.365836  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:27.365850  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:27.458409  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:27.458466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:27.556370  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.057786  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:27.216093  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:29.715861  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:28.265912  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.266993  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:32.267566  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.023363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:30.036965  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:30.037032  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:30.077599  186170 cri.go:89] found id: ""
	I1028 12:19:30.077627  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.077635  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:30.077642  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:30.077691  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:30.115071  186170 cri.go:89] found id: ""
	I1028 12:19:30.115103  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.115113  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:30.115121  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:30.115189  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:30.150636  186170 cri.go:89] found id: ""
	I1028 12:19:30.150665  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.150678  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:30.150684  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:30.150747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:30.188339  186170 cri.go:89] found id: ""
	I1028 12:19:30.188380  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.188390  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:30.188397  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:30.188452  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:30.224072  186170 cri.go:89] found id: ""
	I1028 12:19:30.224102  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.224113  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:30.224121  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:30.224185  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:30.258784  186170 cri.go:89] found id: ""
	I1028 12:19:30.258822  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.258834  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:30.258842  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:30.258903  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:30.302495  186170 cri.go:89] found id: ""
	I1028 12:19:30.302527  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.302535  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:30.302541  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:30.302590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:30.339170  186170 cri.go:89] found id: ""
	I1028 12:19:30.339201  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.339213  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:30.339223  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:30.339236  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:30.396664  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:30.396700  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:30.411609  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:30.411638  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:30.484168  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:30.484196  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:30.484212  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:30.567664  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:30.567704  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:33.111268  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:33.125143  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:33.125229  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:33.168662  186170 cri.go:89] found id: ""
	I1028 12:19:33.168701  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.168712  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:33.168722  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:33.168792  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:33.222421  186170 cri.go:89] found id: ""
	I1028 12:19:33.222451  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.222463  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:33.222471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:33.222536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:33.275637  186170 cri.go:89] found id: ""
	I1028 12:19:33.275669  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.275680  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:33.275689  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:33.275751  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:32.555888  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.556782  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:31.716178  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.213813  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.213999  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.764307  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.766217  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:33.325787  186170 cri.go:89] found id: ""
	I1028 12:19:33.325818  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.325830  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:33.325840  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:33.325900  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:33.361597  186170 cri.go:89] found id: ""
	I1028 12:19:33.361634  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.361644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:33.361652  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:33.361744  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:33.401838  186170 cri.go:89] found id: ""
	I1028 12:19:33.401866  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.401874  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:33.401880  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:33.401941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:33.439315  186170 cri.go:89] found id: ""
	I1028 12:19:33.439342  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.439351  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:33.439359  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:33.439422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:33.479140  186170 cri.go:89] found id: ""
	I1028 12:19:33.479177  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.479188  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:33.479206  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:33.479222  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:33.534059  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:33.534102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:33.549379  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:33.549416  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:33.626567  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:33.626603  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:33.626619  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:33.702398  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:33.702441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.250145  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:36.265123  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:36.265193  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:36.304048  186170 cri.go:89] found id: ""
	I1028 12:19:36.304078  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.304087  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:36.304093  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:36.304141  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:36.348611  186170 cri.go:89] found id: ""
	I1028 12:19:36.348649  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.348660  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:36.348672  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:36.348739  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:36.390510  186170 cri.go:89] found id: ""
	I1028 12:19:36.390543  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.390555  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:36.390563  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:36.390627  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:36.430465  186170 cri.go:89] found id: ""
	I1028 12:19:36.430489  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.430496  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:36.430503  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:36.430556  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:36.472189  186170 cri.go:89] found id: ""
	I1028 12:19:36.472216  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.472226  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:36.472234  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:36.472332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:36.510029  186170 cri.go:89] found id: ""
	I1028 12:19:36.510057  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.510065  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:36.510073  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:36.510133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:36.548556  186170 cri.go:89] found id: ""
	I1028 12:19:36.548581  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.548589  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:36.548595  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:36.548641  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:36.592965  186170 cri.go:89] found id: ""
	I1028 12:19:36.592993  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.593002  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:36.593013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:36.593032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:36.608843  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:36.608878  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:36.680629  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:36.680655  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:36.680672  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:36.768605  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:36.768636  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.815293  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:36.815334  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:37.056333  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.559461  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:38.214406  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:40.214795  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.264988  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:41.267329  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.369371  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:39.382819  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:39.382905  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:39.421953  186170 cri.go:89] found id: ""
	I1028 12:19:39.421990  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.422018  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:39.422028  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:39.422088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:39.457426  186170 cri.go:89] found id: ""
	I1028 12:19:39.457461  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.457478  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:39.457484  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:39.457558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:39.494983  186170 cri.go:89] found id: ""
	I1028 12:19:39.495008  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.495018  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:39.495026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:39.495105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:39.530187  186170 cri.go:89] found id: ""
	I1028 12:19:39.530221  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.530233  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:39.530242  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:39.530308  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:39.571088  186170 cri.go:89] found id: ""
	I1028 12:19:39.571123  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.571133  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:39.571142  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:39.571204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:39.605684  186170 cri.go:89] found id: ""
	I1028 12:19:39.605719  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.605731  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:39.605739  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:39.605804  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:39.639083  186170 cri.go:89] found id: ""
	I1028 12:19:39.639115  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.639125  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:39.639133  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:39.639195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:39.676273  186170 cri.go:89] found id: ""
	I1028 12:19:39.676310  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.676321  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:39.676332  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:39.676349  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:39.733153  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:39.733190  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:39.748475  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:39.748513  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:39.823884  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:39.823906  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:39.823920  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:39.903711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:39.903763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.447237  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:42.460741  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:42.460822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:42.500518  186170 cri.go:89] found id: ""
	I1028 12:19:42.500553  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.500565  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:42.500574  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:42.500636  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:42.542836  186170 cri.go:89] found id: ""
	I1028 12:19:42.542867  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.542875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:42.542882  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:42.542943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:42.581271  186170 cri.go:89] found id: ""
	I1028 12:19:42.581303  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.581322  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:42.581331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:42.581382  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:42.616772  186170 cri.go:89] found id: ""
	I1028 12:19:42.616796  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.616803  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:42.616809  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:42.616858  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:42.650467  186170 cri.go:89] found id: ""
	I1028 12:19:42.650504  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.650515  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:42.650524  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:42.650590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:42.688677  186170 cri.go:89] found id: ""
	I1028 12:19:42.688713  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.688726  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:42.688734  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:42.688796  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:42.727141  186170 cri.go:89] found id: ""
	I1028 12:19:42.727167  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.727174  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:42.727181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:42.727231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:42.767373  186170 cri.go:89] found id: ""
	I1028 12:19:42.767404  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.767415  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:42.767425  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:42.767438  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:42.818474  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:42.818511  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:42.832181  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:42.832210  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:42.905428  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:42.905450  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:42.905465  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:42.985614  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:42.985653  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.056568  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:44.057256  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:42.715261  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.215472  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:43.765595  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.766087  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.527361  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:45.541487  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:45.541574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:45.579562  186170 cri.go:89] found id: ""
	I1028 12:19:45.579591  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.579600  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:45.579606  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:45.579666  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:45.614461  186170 cri.go:89] found id: ""
	I1028 12:19:45.614494  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.614504  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:45.614512  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:45.614575  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:45.651495  186170 cri.go:89] found id: ""
	I1028 12:19:45.651538  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.651550  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:45.651558  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:45.651619  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:45.691664  186170 cri.go:89] found id: ""
	I1028 12:19:45.691699  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.691710  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:45.691718  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:45.691785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:45.730284  186170 cri.go:89] found id: ""
	I1028 12:19:45.730325  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.730341  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:45.730348  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:45.730410  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:45.766524  186170 cri.go:89] found id: ""
	I1028 12:19:45.766554  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.766565  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:45.766573  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:45.766630  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:45.803353  186170 cri.go:89] found id: ""
	I1028 12:19:45.803381  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.803393  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:45.803400  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:45.803468  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:45.842928  186170 cri.go:89] found id: ""
	I1028 12:19:45.842953  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.842960  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:45.842968  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:45.842979  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:45.921782  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:45.921809  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:45.921826  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:45.997269  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:45.997321  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:46.036008  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:46.036042  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:46.090242  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:46.090282  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:46.058519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.556533  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:47.215644  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:49.715563  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.266115  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:50.268535  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:52.271227  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.607052  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:48.620745  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:48.620816  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:48.657550  186170 cri.go:89] found id: ""
	I1028 12:19:48.657582  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.657592  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:48.657601  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:48.657676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:48.695514  186170 cri.go:89] found id: ""
	I1028 12:19:48.695542  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.695549  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:48.695555  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:48.695603  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:48.733589  186170 cri.go:89] found id: ""
	I1028 12:19:48.733616  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.733624  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:48.733631  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:48.733680  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:48.768340  186170 cri.go:89] found id: ""
	I1028 12:19:48.768370  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.768378  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:48.768384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:48.768435  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:48.818057  186170 cri.go:89] found id: ""
	I1028 12:19:48.818086  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.818096  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:48.818105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:48.818169  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:48.854663  186170 cri.go:89] found id: ""
	I1028 12:19:48.854695  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.854705  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:48.854715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:48.854785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:48.888919  186170 cri.go:89] found id: ""
	I1028 12:19:48.888949  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.888960  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:48.888969  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:48.889030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:48.923871  186170 cri.go:89] found id: ""
	I1028 12:19:48.923900  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.923908  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:48.923917  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:48.923928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:48.977985  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:48.978025  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:48.992861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:48.992893  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:49.071925  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:49.071952  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:49.071969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:49.149743  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:49.149784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.693881  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:51.708017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:51.708079  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:51.748837  186170 cri.go:89] found id: ""
	I1028 12:19:51.748872  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.748883  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:51.748892  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:51.748957  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:51.793684  186170 cri.go:89] found id: ""
	I1028 12:19:51.793716  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.793733  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:51.793741  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:51.793803  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:51.832104  186170 cri.go:89] found id: ""
	I1028 12:19:51.832140  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.832151  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:51.832159  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:51.832225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:51.866214  186170 cri.go:89] found id: ""
	I1028 12:19:51.866250  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.866264  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:51.866270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:51.866345  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:51.909073  186170 cri.go:89] found id: ""
	I1028 12:19:51.909100  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.909107  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:51.909113  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:51.909160  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:51.949202  186170 cri.go:89] found id: ""
	I1028 12:19:51.949231  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.949239  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:51.949245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:51.949306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:51.990977  186170 cri.go:89] found id: ""
	I1028 12:19:51.991004  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.991011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:51.991018  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:51.991069  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:52.027180  186170 cri.go:89] found id: ""
	I1028 12:19:52.027215  186170 logs.go:282] 0 containers: []
	W1028 12:19:52.027226  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:52.027237  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:52.027259  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:52.080482  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:52.080536  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:52.097572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:52.097612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:52.173055  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:52.173095  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:52.173113  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:52.249950  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:52.249995  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.056089  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:53.056973  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:55.057853  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:51.716787  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.214943  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.765208  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:57.267687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.794765  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:54.809435  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:54.809548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:54.846763  186170 cri.go:89] found id: ""
	I1028 12:19:54.846793  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.846805  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:54.846815  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:54.846876  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:54.885359  186170 cri.go:89] found id: ""
	I1028 12:19:54.885396  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.885409  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:54.885417  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:54.885481  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:54.922612  186170 cri.go:89] found id: ""
	I1028 12:19:54.922639  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.922650  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:54.922659  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:54.922722  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:54.958406  186170 cri.go:89] found id: ""
	I1028 12:19:54.958439  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.958450  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:54.958459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:54.958525  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:54.995319  186170 cri.go:89] found id: ""
	I1028 12:19:54.995350  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.995361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:54.995370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:54.995440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:55.032511  186170 cri.go:89] found id: ""
	I1028 12:19:55.032543  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.032551  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:55.032559  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:55.032624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:55.073196  186170 cri.go:89] found id: ""
	I1028 12:19:55.073226  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.073238  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:55.073245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:55.073310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:55.113726  186170 cri.go:89] found id: ""
	I1028 12:19:55.113754  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.113762  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:55.113771  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:55.113787  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:55.164402  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:55.164442  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:55.180729  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:55.180763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:55.254437  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:55.254466  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:55.254483  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:55.341392  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:55.341441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:57.883896  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:57.897429  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:57.897539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:57.933084  186170 cri.go:89] found id: ""
	I1028 12:19:57.933109  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.933118  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:57.933127  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:57.933198  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:57.971244  186170 cri.go:89] found id: ""
	I1028 12:19:57.971276  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.971289  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:57.971298  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:57.971361  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:58.007916  186170 cri.go:89] found id: ""
	I1028 12:19:58.007952  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.007963  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:58.007972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:58.008050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:58.043042  186170 cri.go:89] found id: ""
	I1028 12:19:58.043084  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.043094  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:58.043103  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:58.043172  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:58.080277  186170 cri.go:89] found id: ""
	I1028 12:19:58.080314  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.080324  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:58.080332  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:58.080395  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:58.117254  186170 cri.go:89] found id: ""
	I1028 12:19:58.117292  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.117301  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:58.117308  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:58.117356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:58.152830  186170 cri.go:89] found id: ""
	I1028 12:19:58.152862  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.152873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:58.152881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:58.152946  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:58.190229  186170 cri.go:89] found id: ""
	I1028 12:19:58.190259  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.190270  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:58.190281  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:58.190296  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:58.231792  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:58.231823  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:58.291189  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:58.291233  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:58.307804  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:58.307837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:19:57.556056  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.557091  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:00.050404  185942 pod_ready.go:82] duration metric: took 4m0.000726571s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:00.050457  185942 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:00.050479  185942 pod_ready.go:39] duration metric: took 4m12.759391454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:00.050506  185942 kubeadm.go:597] duration metric: took 4m20.427916933s to restartPrimaryControlPlane
	W1028 12:20:00.050569  185942 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:00.050616  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:19:56.715048  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.215821  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.769397  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:02.265702  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:19:58.384490  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:58.384515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:58.384530  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:00.963569  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:00.977292  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:20:00.977363  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:20:01.017161  186170 cri.go:89] found id: ""
	I1028 12:20:01.017190  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.017198  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:20:01.017204  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:20:01.017254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:20:01.054651  186170 cri.go:89] found id: ""
	I1028 12:20:01.054687  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.054698  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:20:01.054705  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:20:01.054768  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:20:01.092934  186170 cri.go:89] found id: ""
	I1028 12:20:01.092968  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.092979  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:20:01.092988  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:20:01.093048  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:20:01.134463  186170 cri.go:89] found id: ""
	I1028 12:20:01.134499  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.134510  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:20:01.134519  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:20:01.134580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:20:01.171922  186170 cri.go:89] found id: ""
	I1028 12:20:01.171960  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.171970  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:20:01.171978  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:20:01.172050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:20:01.208664  186170 cri.go:89] found id: ""
	I1028 12:20:01.208694  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.208703  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:20:01.208715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:20:01.208781  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:20:01.248207  186170 cri.go:89] found id: ""
	I1028 12:20:01.248242  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.248251  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:20:01.248258  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:20:01.248318  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:20:01.289182  186170 cri.go:89] found id: ""
	I1028 12:20:01.289212  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.289222  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:20:01.289233  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:20:01.289277  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:20:01.334646  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:20:01.334679  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:20:01.396212  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:20:01.396255  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:20:01.411774  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:20:01.411801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:20:01.497745  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:20:01.497772  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:20:01.497784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:01.715264  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.216628  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.765386  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:06.765802  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.092363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:04.106585  186170 kubeadm.go:597] duration metric: took 4m1.83229859s to restartPrimaryControlPlane
	W1028 12:20:04.106657  186170 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:04.106678  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:07.549703  186170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.442997936s)
	I1028 12:20:07.549781  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:07.565304  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:07.577919  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:07.590433  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:07.590461  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:07.590514  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:07.600793  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:07.600858  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:07.611331  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:07.621191  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:07.621256  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:07.631722  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.642180  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:07.642255  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.654425  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:07.664696  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:07.664755  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:07.675272  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:07.902931  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:06.715439  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.214561  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.216343  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.265899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.764867  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:13.716362  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.214893  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:14.264333  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.765340  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:18.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:20.715790  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:19.270934  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:21.764931  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:22.715880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:25.216499  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:23.766240  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.271567  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.353961  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.303321788s)
	I1028 12:20:26.354038  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:26.373066  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:26.386209  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:26.398568  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:26.398591  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:26.398634  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:26.410916  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:26.410976  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:26.423771  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:26.435883  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:26.435961  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:26.448506  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.460449  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:26.460506  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.472817  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:26.483653  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:26.483743  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:26.494435  185942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:26.682378  185942 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:27.715587  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:29.717407  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:28.766206  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:30.766289  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.820344  185942 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:20:35.820446  185942 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:20:35.820555  185942 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:20:35.820688  185942 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:20:35.820812  185942 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:20:35.820902  185942 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:20:35.823423  185942 out.go:235]   - Generating certificates and keys ...
	I1028 12:20:35.823594  185942 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:20:35.823700  185942 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:20:35.823804  185942 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:20:35.823893  185942 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:20:35.824001  185942 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:20:35.824082  185942 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:20:35.824167  185942 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:20:35.824255  185942 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:20:35.824360  185942 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:20:35.824445  185942 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:20:35.824504  185942 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:20:35.824566  185942 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:20:35.824622  185942 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:20:35.824725  185942 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:20:35.824805  185942 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:20:35.824944  185942 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:20:35.825058  185942 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:20:35.825209  185942 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:20:35.825300  185942 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:20:35.826890  185942 out.go:235]   - Booting up control plane ...
	I1028 12:20:35.827007  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:20:35.827077  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:20:35.827142  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:20:35.827285  185942 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:20:35.827420  185942 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:20:35.827487  185942 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:20:35.827705  185942 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:20:35.827848  185942 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:20:35.827943  185942 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.264999ms
	I1028 12:20:35.828059  185942 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:20:35.828130  185942 kubeadm.go:310] [api-check] The API server is healthy after 5.502732581s
	I1028 12:20:35.828299  185942 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:20:35.828472  185942 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:20:35.828523  185942 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:20:35.828712  185942 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-709250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:20:35.828764  185942 kubeadm.go:310] [bootstrap-token] Using token: srdxzz.lxk56bs7sgkeocij
	I1028 12:20:35.830228  185942 out.go:235]   - Configuring RBAC rules ...
	I1028 12:20:35.830335  185942 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:20:35.830422  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:20:35.830563  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:20:35.830729  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:20:35.830842  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:20:35.830928  185942 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:20:35.831065  185942 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:20:35.831122  185942 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:20:35.831174  185942 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:20:35.831181  185942 kubeadm.go:310] 
	I1028 12:20:35.831229  185942 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:20:35.831237  185942 kubeadm.go:310] 
	I1028 12:20:35.831302  185942 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:20:35.831313  185942 kubeadm.go:310] 
	I1028 12:20:35.831356  185942 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:20:35.831439  185942 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:20:35.831517  185942 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:20:35.831531  185942 kubeadm.go:310] 
	I1028 12:20:35.831616  185942 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:20:35.831628  185942 kubeadm.go:310] 
	I1028 12:20:35.831678  185942 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:20:35.831682  185942 kubeadm.go:310] 
	I1028 12:20:35.831730  185942 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:20:35.831809  185942 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:20:35.831921  185942 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:20:35.831933  185942 kubeadm.go:310] 
	I1028 12:20:35.832041  185942 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:20:35.832141  185942 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:20:35.832150  185942 kubeadm.go:310] 
	I1028 12:20:35.832249  185942 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832373  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:20:35.832404  185942 kubeadm.go:310] 	--control-plane 
	I1028 12:20:35.832414  185942 kubeadm.go:310] 
	I1028 12:20:35.832516  185942 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:20:35.832524  185942 kubeadm.go:310] 
	I1028 12:20:35.832642  185942 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832812  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 12:20:35.832833  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:20:35.832843  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:20:35.834428  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:20:35.835603  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:20:35.847857  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:20:35.867921  185942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:20:35.868088  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:35.868107  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709250 minikube.k8s.io/updated_at=2024_10_28T12_20_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=embed-certs-709250 minikube.k8s.io/primary=true
	I1028 12:20:35.908233  185942 ops.go:34] apiserver oom_adj: -16
	I1028 12:20:32.215299  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:34.716880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:32.766922  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.267132  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:36.121114  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:36.621188  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.122032  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.621405  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.122105  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.621960  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.122142  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.622093  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.121643  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.287609  185942 kubeadm.go:1113] duration metric: took 4.419612649s to wait for elevateKubeSystemPrivileges
	I1028 12:20:40.287656  185942 kubeadm.go:394] duration metric: took 5m0.720591132s to StartCluster
	I1028 12:20:40.287703  185942 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.287814  185942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:20:40.290472  185942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.290787  185942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:20:40.291051  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:20:40.290926  185942 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:20:40.291125  185942 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709250"
	I1028 12:20:40.291126  185942 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709250"
	I1028 12:20:40.291142  185942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709250"
	I1028 12:20:40.291148  185942 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709250"
	W1028 12:20:40.291158  185942 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:20:40.291182  185942 addons.go:69] Setting metrics-server=true in profile "embed-certs-709250"
	I1028 12:20:40.291220  185942 addons.go:234] Setting addon metrics-server=true in "embed-certs-709250"
	W1028 12:20:40.291233  185942 addons.go:243] addon metrics-server should already be in state true
	I1028 12:20:40.291282  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291195  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291593  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291631  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291727  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291771  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291786  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291813  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.292877  185942 out.go:177] * Verifying Kubernetes components...
	I1028 12:20:40.294858  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:20:40.310225  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I1028 12:20:40.310814  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.311524  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.311552  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.311961  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.312174  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.312867  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1028 12:20:40.312901  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I1028 12:20:40.313354  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313389  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313964  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.313987  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.313967  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.314040  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.314365  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314428  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314883  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.314907  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.315710  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.315744  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.316210  185942 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709250"
	W1028 12:20:40.316229  185942 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:20:40.316261  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.316619  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.316648  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.331940  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1028 12:20:40.332732  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.333487  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.333537  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.333932  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.334145  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.336054  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I1028 12:20:40.336291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.336441  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337079  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.337117  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.337211  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I1028 12:20:40.337597  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337998  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338171  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.338189  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.338291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.338925  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338972  185942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:20:40.339570  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.339621  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.340197  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.341080  185942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.341099  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:20:40.341115  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.341872  185942 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:20:40.343244  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:20:40.343278  185942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:20:40.343308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.344718  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345186  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.345216  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345457  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.345666  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.345842  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.346053  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.346977  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347514  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.347546  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347739  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.347936  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.348069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.348236  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.357912  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I1028 12:20:40.358358  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.358838  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.358858  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.359224  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.359441  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.361308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.361630  185942 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.361654  185942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:20:40.361675  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.365789  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366319  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.366347  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366659  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.366879  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.367069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.367245  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.526205  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:20:40.545404  185942 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555003  185942 node_ready.go:49] node "embed-certs-709250" has status "Ready":"True"
	I1028 12:20:40.555028  185942 node_ready.go:38] duration metric: took 9.592797ms for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555047  185942 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:40.564021  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:40.660020  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:20:40.660061  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:20:40.666435  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.691423  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.692384  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:20:40.692411  185942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:20:40.739518  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:20:40.739549  185942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:20:40.765228  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:20:37.216347  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:39.716471  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.192384  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192422  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192491  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192514  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192740  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192759  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192783  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192791  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192915  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192942  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192951  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192962  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.193093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193125  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193131  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.193373  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193403  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193409  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.229776  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.229808  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.230111  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.230127  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.624688  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.624714  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625048  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.625055  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625066  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625074  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.625081  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625283  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625312  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625325  185942 addons.go:475] Verifying addon metrics-server=true in "embed-certs-709250"
	I1028 12:20:41.625329  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.627194  185942 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:20:37.771166  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:40.265616  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.265990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.628572  185942 addons.go:510] duration metric: took 1.337655555s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:20:42.572801  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.571062  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.571095  185942 pod_ready.go:82] duration metric: took 3.007040788s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.571110  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576592  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.576620  185942 pod_ready.go:82] duration metric: took 5.500425ms for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576633  185942 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:45.583586  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.216524  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:44.715547  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.758721  186547 pod_ready.go:82] duration metric: took 4m0.000295852s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:43.758758  186547 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:43.758783  186547 pod_ready.go:39] duration metric: took 4m13.710127509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:43.758811  186547 kubeadm.go:597] duration metric: took 4m21.647032906s to restartPrimaryControlPlane
	W1028 12:20:43.758873  186547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:43.758910  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:47.089478  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.089502  185942 pod_ready.go:82] duration metric: took 3.512861746s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.089512  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094229  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.094255  185942 pod_ready.go:82] duration metric: took 4.736326ms for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094267  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098823  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.098859  185942 pod_ready.go:82] duration metric: took 4.584003ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098872  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104063  185942 pod_ready.go:93] pod "kube-proxy-gck6r" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.104083  185942 pod_ready.go:82] duration metric: took 5.204526ms for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104091  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168177  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.168210  185942 pod_ready.go:82] duration metric: took 64.110225ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168221  185942 pod_ready.go:39] duration metric: took 6.613160968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:47.168243  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:20:47.168309  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:47.186907  185942 api_server.go:72] duration metric: took 6.896070864s to wait for apiserver process to appear ...
	I1028 12:20:47.186944  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:20:47.186998  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:20:47.191428  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:20:47.192677  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:20:47.192708  185942 api_server.go:131] duration metric: took 5.753471ms to wait for apiserver health ...
	I1028 12:20:47.192719  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:20:47.372534  185942 system_pods.go:59] 9 kube-system pods found
	I1028 12:20:47.372571  185942 system_pods.go:61] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.372580  185942 system_pods.go:61] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.372585  185942 system_pods.go:61] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.372590  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.372595  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.372599  185942 system_pods.go:61] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.372605  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.372614  185942 system_pods.go:61] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.372620  185942 system_pods.go:61] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.372633  185942 system_pods.go:74] duration metric: took 179.905205ms to wait for pod list to return data ...
	I1028 12:20:47.372647  185942 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:20:47.569853  185942 default_sa.go:45] found service account: "default"
	I1028 12:20:47.569886  185942 default_sa.go:55] duration metric: took 197.228265ms for default service account to be created ...
	I1028 12:20:47.569900  185942 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:20:47.770906  185942 system_pods.go:86] 9 kube-system pods found
	I1028 12:20:47.770941  185942 system_pods.go:89] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.770948  185942 system_pods.go:89] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.770953  185942 system_pods.go:89] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.770956  185942 system_pods.go:89] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.770960  185942 system_pods.go:89] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.770964  185942 system_pods.go:89] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.770967  185942 system_pods.go:89] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.770973  185942 system_pods.go:89] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.770977  185942 system_pods.go:89] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.770984  185942 system_pods.go:126] duration metric: took 201.078078ms to wait for k8s-apps to be running ...
	I1028 12:20:47.770990  185942 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:20:47.771033  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:47.787139  185942 system_svc.go:56] duration metric: took 16.13776ms WaitForService to wait for kubelet
	I1028 12:20:47.787171  185942 kubeadm.go:582] duration metric: took 7.496343244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:20:47.787191  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:20:47.969485  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:20:47.969516  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:20:47.969547  185942 node_conditions.go:105] duration metric: took 182.350787ms to run NodePressure ...
	I1028 12:20:47.969562  185942 start.go:241] waiting for startup goroutines ...
	I1028 12:20:47.969572  185942 start.go:246] waiting for cluster config update ...
	I1028 12:20:47.969586  185942 start.go:255] writing updated cluster config ...
	I1028 12:20:47.969916  185942 ssh_runner.go:195] Run: rm -f paused
	I1028 12:20:48.021806  185942 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:20:48.023816  185942 out.go:177] * Done! kubectl is now configured to use "embed-certs-709250" cluster and "default" namespace by default
	I1028 12:20:46.716844  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:49.216673  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:51.715101  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:53.715509  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:56.217201  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:58.715405  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:00.715890  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:03.214669  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:05.215054  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.108895  186547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.349960271s)
	I1028 12:21:10.108979  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:10.126064  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:21:10.139862  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:21:10.150752  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:21:10.150780  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:21:10.150837  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:21:10.161522  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:21:10.161604  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:21:10.172230  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:21:10.183231  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:21:10.183299  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:21:10.194261  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.204462  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:21:10.204524  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.214991  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:21:10.225246  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:21:10.225315  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:21:10.235439  186547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:21:10.280951  186547 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:21:10.281020  186547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:21:10.391997  186547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:21:10.392163  186547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:21:10.392297  186547 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:21:10.402113  186547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:21:07.217549  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:09.716985  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.404087  186547 out.go:235]   - Generating certificates and keys ...
	I1028 12:21:10.404194  186547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:21:10.404312  186547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:21:10.404441  186547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:21:10.404537  186547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:21:10.404642  186547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:21:10.404719  186547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:21:10.404824  186547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:21:10.404914  186547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:21:10.405021  186547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:21:10.405124  186547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:21:10.405185  186547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:21:10.405269  186547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:21:10.608657  186547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:21:10.910608  186547 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:21:11.076768  186547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:21:11.244109  186547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:21:11.685910  186547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:21:11.686470  186547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:21:11.692266  186547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:21:11.694100  186547 out.go:235]   - Booting up control plane ...
	I1028 12:21:11.694231  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:21:11.694377  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:21:11.694607  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:21:11.713908  186547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:21:11.720788  186547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:21:11.720874  186547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:21:11.856867  186547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:21:11.856998  186547 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:21:12.358968  186547 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.942759ms
	I1028 12:21:12.359067  186547 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:21:12.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:14.208408  185546 pod_ready.go:82] duration metric: took 4m0.000135609s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:21:14.208447  185546 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:21:14.208457  185546 pod_ready.go:39] duration metric: took 4m3.200735753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:14.208485  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:14.208519  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:14.208571  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:14.266154  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.266184  185546 cri.go:89] found id: ""
	I1028 12:21:14.266196  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:14.266255  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.271416  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:14.271497  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:14.310426  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.310457  185546 cri.go:89] found id: ""
	I1028 12:21:14.310467  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:14.310529  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.314961  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:14.315037  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:14.362502  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.362530  185546 cri.go:89] found id: ""
	I1028 12:21:14.362540  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:14.362602  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.368118  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:14.368198  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:14.416827  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.416867  185546 cri.go:89] found id: ""
	I1028 12:21:14.416877  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:14.416943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.421640  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:14.421716  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:14.473506  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:14.473552  185546 cri.go:89] found id: ""
	I1028 12:21:14.473563  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:14.473627  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.480106  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:14.480183  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:14.529939  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:14.529964  185546 cri.go:89] found id: ""
	I1028 12:21:14.529971  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:14.530120  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.536199  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:14.536264  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:14.578374  185546 cri.go:89] found id: ""
	I1028 12:21:14.578407  185546 logs.go:282] 0 containers: []
	W1028 12:21:14.578419  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:14.578428  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:14.578490  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:14.620216  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:14.620243  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:14.620249  185546 cri.go:89] found id: ""
	I1028 12:21:14.620258  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:14.620323  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.625798  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.630653  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:14.630683  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:14.645364  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:14.645404  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.686202  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:14.686234  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.730094  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:14.730125  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:14.786272  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:14.786322  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:14.875705  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:14.875746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.931913  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:14.931960  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.991914  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:14.991953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:15.037022  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:15.037056  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:15.107597  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:15.107649  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:15.161401  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:15.161442  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:15.201916  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:15.201953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:15.682647  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:15.682694  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
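The repeated cri.go/logs.go lines above follow one pattern per component: list container IDs with "crictl ps -a --quiet --name=<component>", then fetch each container's recent log with "crictl logs --tail 400 <id>". A condensed sketch of that pattern follows, using exactly the crictl invocations shown in the log; it is an illustration, not minikube's own logs.go implementation, and it assumes crictl is on PATH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherLogs lists containers for one component and prints the tail of each
// container's log, mirroring the crictl commands in the log above.
func gatherLogs(component string) error {
	// Same invocation as "sudo crictl ps -a --quiet --name=<component>".
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Printf("=== %s [%s] ===\n", component, id)
		// Same invocation as "sudo /usr/bin/crictl logs --tail 400 <id>".
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Print(string(logs))
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		if err := gatherLogs(c); err != nil {
			fmt.Println("error:", err)
		}
	}
}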
	I1028 12:21:17.861193  186547 kubeadm.go:310] [api-check] The API server is healthy after 5.502448006s
	I1028 12:21:17.874856  186547 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:21:17.889216  186547 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:21:17.933411  186547 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:21:17.933726  186547 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-349222 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:21:17.964667  186547 kubeadm.go:310] [bootstrap-token] Using token: o3vo7c.1x7759cggrb8kl7r
	I1028 12:21:17.966405  186547 out.go:235]   - Configuring RBAC rules ...
	I1028 12:21:17.966590  186547 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:21:17.982231  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:21:17.991850  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:21:17.996073  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:21:18.003531  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:21:18.008369  186547 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:21:18.272751  186547 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:21:18.724493  186547 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:21:19.269583  186547 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:21:19.270654  186547 kubeadm.go:310] 
	I1028 12:21:19.270715  186547 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:21:19.270722  186547 kubeadm.go:310] 
	I1028 12:21:19.270782  186547 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:21:19.270787  186547 kubeadm.go:310] 
	I1028 12:21:19.270816  186547 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:21:19.270875  186547 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:21:19.270938  186547 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:21:19.270949  186547 kubeadm.go:310] 
	I1028 12:21:19.271022  186547 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:21:19.271063  186547 kubeadm.go:310] 
	I1028 12:21:19.271165  186547 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:21:19.271190  186547 kubeadm.go:310] 
	I1028 12:21:19.271266  186547 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:21:19.271380  186547 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:21:19.271470  186547 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:21:19.271479  186547 kubeadm.go:310] 
	I1028 12:21:19.271600  186547 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:21:19.271697  186547 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:21:19.271709  186547 kubeadm.go:310] 
	I1028 12:21:19.271838  186547 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272010  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:21:19.272068  186547 kubeadm.go:310] 	--control-plane 
	I1028 12:21:19.272079  186547 kubeadm.go:310] 
	I1028 12:21:19.272250  186547 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:21:19.272270  186547 kubeadm.go:310] 
	I1028 12:21:19.272391  186547 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272568  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
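The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. It can be recomputed on the control plane from the CA certificate; a standard-library sketch follows, assuming the default kubeadm path /etc/kubernetes/pki/ca.crt.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Default kubeadm location of the cluster CA certificate.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}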
	I1028 12:21:19.273899  186547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:21:19.273955  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:21:19.273977  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:21:19.275868  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:21:18.355132  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:18.373260  185546 api_server.go:72] duration metric: took 4m14.615888944s to wait for apiserver process to appear ...
	I1028 12:21:18.373292  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:18.373353  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:18.373410  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:18.413207  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.413239  185546 cri.go:89] found id: ""
	I1028 12:21:18.413250  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:18.413336  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.419588  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:18.419655  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:18.476341  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.476373  185546 cri.go:89] found id: ""
	I1028 12:21:18.476383  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:18.476450  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.482835  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:18.482926  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:18.524934  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.524964  185546 cri.go:89] found id: ""
	I1028 12:21:18.524975  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:18.525040  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.530198  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:18.530284  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:18.577310  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:18.577338  185546 cri.go:89] found id: ""
	I1028 12:21:18.577349  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:18.577413  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.583048  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:18.583133  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:18.622556  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:18.622587  185546 cri.go:89] found id: ""
	I1028 12:21:18.622598  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:18.622701  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.628450  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:18.628540  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:18.674827  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:18.674861  185546 cri.go:89] found id: ""
	I1028 12:21:18.674873  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:18.674943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.680282  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:18.680354  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:18.738014  185546 cri.go:89] found id: ""
	I1028 12:21:18.738044  185546 logs.go:282] 0 containers: []
	W1028 12:21:18.738061  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:18.738070  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:18.738142  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:18.780615  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:18.780645  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:18.780651  185546 cri.go:89] found id: ""
	I1028 12:21:18.780660  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:18.780725  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.786003  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.790208  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:18.790231  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:18.806481  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:18.806523  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.853343  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:18.853382  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.906386  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:18.906424  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.948149  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:18.948182  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:19.000642  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:19.000678  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:19.038715  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:19.038744  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:19.079234  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:19.079271  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:19.147309  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:19.147349  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:19.271582  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:19.271620  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:19.319149  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:19.319195  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:19.385399  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:19.385437  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:19.811993  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:19.812035  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:19.277402  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:21:19.296307  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
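The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration referenced by "Configuring bridge CNI". Its exact contents are not shown in the log; the sketch below writes a representative bridge + portmap conflist of the kind the standard CNI plugins accept. The bridge name and pod subnet here are illustrative assumptions, not the file minikube actually generates.

package main

import "os"

// A representative bridge CNI configuration chain (bridge + portmap).
// Field values are illustrative; the real 1-k8s.conflist is not reproduced in the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}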
	I1028 12:21:19.323315  186547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-349222 minikube.k8s.io/updated_at=2024_10_28T12_21_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=default-k8s-diff-port-349222 minikube.k8s.io/primary=true
	I1028 12:21:19.550855  186547 ops.go:34] apiserver oom_adj: -16
	I1028 12:21:19.550882  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.051004  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.551001  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.051215  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.551283  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.050989  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.551423  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.051101  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.151453  186547 kubeadm.go:1113] duration metric: took 3.828156807s to wait for elevateKubeSystemPrivileges
	I1028 12:21:23.151505  186547 kubeadm.go:394] duration metric: took 5m1.103220882s to StartCluster
	I1028 12:21:23.151530  186547 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.151623  186547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:21:23.153557  186547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.153874  186547 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:21:23.153996  186547 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:21:23.154101  186547 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154122  186547 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154133  186547 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:21:23.154128  186547 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154165  186547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-349222"
	I1028 12:21:23.154160  186547 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154197  186547 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154213  186547 addons.go:243] addon metrics-server should already be in state true
	I1028 12:21:23.154167  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154254  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154664  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154679  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154749  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154135  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:21:23.154803  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154844  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154948  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.155649  186547 out.go:177] * Verifying Kubernetes components...
	I1028 12:21:23.157234  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:21:23.172278  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I1028 12:21:23.172870  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.173402  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.173429  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.173851  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.174056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.176299  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1028 12:21:23.176307  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I1028 12:21:23.176897  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177023  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177553  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177576  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177589  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177606  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177887  186547 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.177912  186547 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:21:23.177945  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.177971  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178030  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178369  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178404  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178541  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178572  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178961  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.179002  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.196089  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I1028 12:21:23.197979  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.198578  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.198607  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.199082  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.199301  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.199604  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I1028 12:21:23.200120  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.200519  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.200539  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.200938  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.201204  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.201711  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.201794  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1028 12:21:23.202225  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.202937  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.202956  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.203305  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.203753  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.203791  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.204026  186547 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:21:23.204210  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.205470  186547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:21:23.205490  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:21:23.205554  186547 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:21:23.205576  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.207334  186547 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.207352  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:21:23.207372  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.209573  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.210230  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210366  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.210608  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.210806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.211061  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.211884  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.211910  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.211928  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.212104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.212351  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.212570  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.212762  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.231664  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1028 12:21:23.232283  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.232904  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.232929  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.233414  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.233658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.236162  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.236665  186547 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.236680  186547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:21:23.236700  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.240368  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.240697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240848  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.241034  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.241156  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.241281  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.409461  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:21:23.430686  186547 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442439  186547 node_ready.go:49] node "default-k8s-diff-port-349222" has status "Ready":"True"
	I1028 12:21:23.442466  186547 node_ready.go:38] duration metric: took 11.749381ms for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442480  186547 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:23.447741  186547 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:23.515393  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.545556  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.575253  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:21:23.575280  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:21:23.663892  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:21:23.663920  186547 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:21:23.745621  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:23.745656  186547 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:21:23.823360  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:24.391754  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.391789  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.392092  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.392112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.392123  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.392130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393697  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393716  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.393725  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.393733  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393810  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393828  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393886  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394088  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.394112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.413957  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.414000  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.414363  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.414385  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853053  186547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029641945s)
	I1028 12:21:24.853107  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853123  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853434  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.853492  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853501  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853518  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853543  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853784  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853801  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853813  186547 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-349222"
	I1028 12:21:24.855707  186547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:21:22.373623  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:21:22.379559  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:21:22.380750  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:22.380772  185546 api_server.go:131] duration metric: took 4.007460794s to wait for apiserver health ...
	I1028 12:21:22.380783  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:22.380811  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:22.380875  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:22.426678  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:22.426710  185546 cri.go:89] found id: ""
	I1028 12:21:22.426720  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:22.426781  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.431942  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:22.432014  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:22.472504  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:22.472531  185546 cri.go:89] found id: ""
	I1028 12:21:22.472540  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:22.472595  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.478446  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:22.478511  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:22.520149  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.520169  185546 cri.go:89] found id: ""
	I1028 12:21:22.520177  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:22.520235  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.525716  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:22.525804  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:22.564801  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:22.564832  185546 cri.go:89] found id: ""
	I1028 12:21:22.564844  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:22.564909  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.570065  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:22.570147  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:22.613601  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.613628  185546 cri.go:89] found id: ""
	I1028 12:21:22.613637  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:22.613700  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.618413  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:22.618483  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:22.664329  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.664358  185546 cri.go:89] found id: ""
	I1028 12:21:22.664369  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:22.664430  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.669013  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:22.669084  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:22.706046  185546 cri.go:89] found id: ""
	I1028 12:21:22.706074  185546 logs.go:282] 0 containers: []
	W1028 12:21:22.706084  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:22.706091  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:22.706159  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:22.747718  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.747744  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.747750  185546 cri.go:89] found id: ""
	I1028 12:21:22.747759  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:22.747825  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.752857  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.758383  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:22.758410  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.800846  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:22.800882  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.858663  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:22.858702  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.896915  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:22.896959  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.938476  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:22.938503  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.984601  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:22.984628  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:23.000223  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:23.000259  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:23.130709  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:23.130746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:23.189821  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:23.189859  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:23.244463  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:23.244535  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:23.299279  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:23.299318  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:23.714691  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:23.714730  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:23.777703  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:23.777749  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:26.364133  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:21:26.364166  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.364171  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.364175  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.364179  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.364182  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.364185  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.364191  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.364195  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.364201  185546 system_pods.go:74] duration metric: took 3.98341316s to wait for pod list to return data ...
	I1028 12:21:26.364209  185546 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:26.366899  185546 default_sa.go:45] found service account: "default"
	I1028 12:21:26.366925  185546 default_sa.go:55] duration metric: took 2.710943ms for default service account to be created ...
	I1028 12:21:26.366934  185546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:26.371193  185546 system_pods.go:86] 8 kube-system pods found
	I1028 12:21:26.371219  185546 system_pods.go:89] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.371224  185546 system_pods.go:89] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.371228  185546 system_pods.go:89] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.371233  185546 system_pods.go:89] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.371237  185546 system_pods.go:89] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.371240  185546 system_pods.go:89] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.371246  185546 system_pods.go:89] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.371250  185546 system_pods.go:89] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.371257  185546 system_pods.go:126] duration metric: took 4.318058ms to wait for k8s-apps to be running ...
	I1028 12:21:26.371265  185546 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:26.371317  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:26.389093  185546 system_svc.go:56] duration metric: took 17.81758ms WaitForService to wait for kubelet
	I1028 12:21:26.389131  185546 kubeadm.go:582] duration metric: took 4m22.631766189s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:26.389158  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:26.392700  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:26.392728  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:26.392741  185546 node_conditions.go:105] duration metric: took 3.576663ms to run NodePressure ...
	I1028 12:21:26.392757  185546 start.go:241] waiting for startup goroutines ...
	I1028 12:21:26.392766  185546 start.go:246] waiting for cluster config update ...
	I1028 12:21:26.392781  185546 start.go:255] writing updated cluster config ...
	I1028 12:21:26.393086  185546 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:26.444274  185546 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:26.446322  185546 out.go:177] * Done! kubectl is now configured to use "no-preload-871884" cluster and "default" namespace by default
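Editor's note: the wait loop above (kube-system pod list, default service account, kubelet service, NodePressure inputs) can be approximated by hand against the same profile. A minimal sketch, assuming the "no-preload-871884" kubectl context from this run and SSH access to the node; this is not minikube's exact code path:

    # Rough manual equivalent of the readiness checks logged above (a sketch)
    kubectl --context no-preload-871884 get pods -n kube-system            # every core pod should be Running
    kubectl --context no-preload-871884 get serviceaccount default         # the default service account must exist
    kubectl --context no-preload-871884 describe node | grep -E 'cpu:|ephemeral-storage:'   # NodePressure inputs
    # on the node itself:
    sudo systemctl is-active --quiet kubelet && echo "kubelet active"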
	I1028 12:21:24.856866  186547 addons.go:510] duration metric: took 1.702877543s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:21:25.462800  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:27.954511  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:30.454530  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.455161  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.955218  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.955242  186547 pod_ready.go:82] duration metric: took 9.507473956s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.955253  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.960990  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.961018  186547 pod_ready.go:82] duration metric: took 5.757431ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.961032  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966957  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.966981  186547 pod_ready.go:82] duration metric: took 5.940549ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966991  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972168  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.972194  186547 pod_ready.go:82] duration metric: took 5.195057ms for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972205  186547 pod_ready.go:39] duration metric: took 9.529713389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:32.972224  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:32.972294  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:32.988675  186547 api_server.go:72] duration metric: took 9.83476496s to wait for apiserver process to appear ...
	I1028 12:21:32.988711  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:32.988736  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:21:32.993068  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:21:32.994352  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:32.994377  186547 api_server.go:131] duration metric: took 5.656136ms to wait for apiserver health ...
	I1028 12:21:32.994387  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:32.999982  186547 system_pods.go:59] 9 kube-system pods found
	I1028 12:21:33.000010  186547 system_pods.go:61] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.000017  186547 system_pods.go:61] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.000024  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.000029  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.000033  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.000037  186547 system_pods.go:61] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.000040  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.000046  186547 system_pods.go:61] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.000051  186547 system_pods.go:61] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.000064  186547 system_pods.go:74] duration metric: took 5.66991ms to wait for pod list to return data ...
	I1028 12:21:33.000075  186547 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:33.003124  186547 default_sa.go:45] found service account: "default"
	I1028 12:21:33.003149  186547 default_sa.go:55] duration metric: took 3.067652ms for default service account to be created ...
	I1028 12:21:33.003159  186547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:33.155864  186547 system_pods.go:86] 9 kube-system pods found
	I1028 12:21:33.155902  186547 system_pods.go:89] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.155914  186547 system_pods.go:89] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.155921  186547 system_pods.go:89] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.155931  186547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.155938  186547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.155943  186547 system_pods.go:89] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.155948  186547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.155956  186547 system_pods.go:89] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.155965  186547 system_pods.go:89] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.155977  186547 system_pods.go:126] duration metric: took 152.809784ms to wait for k8s-apps to be running ...
	I1028 12:21:33.155991  186547 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:33.156049  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:33.171592  186547 system_svc.go:56] duration metric: took 15.589436ms WaitForService to wait for kubelet
	I1028 12:21:33.171647  186547 kubeadm.go:582] duration metric: took 10.017726239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:33.171672  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:33.352932  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:33.352984  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:33.352995  186547 node_conditions.go:105] duration metric: took 181.317488ms to run NodePressure ...
	I1028 12:21:33.353006  186547 start.go:241] waiting for startup goroutines ...
	I1028 12:21:33.353014  186547 start.go:246] waiting for cluster config update ...
	I1028 12:21:33.353024  186547 start.go:255] writing updated cluster config ...
	I1028 12:21:33.353314  186547 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:33.405276  186547 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:33.407589  186547 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-349222" cluster and "default" namespace by default
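Editor's note: the healthz probe logged above (https://192.168.50.75:8444/healthz returning 200 "ok") can be reproduced with curl. A minimal sketch using the address from this run; -k skips verification of the cluster-internal CA, and both endpoints assume anonymous auth is still enabled (the default):

    curl -sk https://192.168.50.75:8444/healthz && echo                    # expect the literal body "ok"
    curl -sk https://192.168.50.75:8444/version                            # reports gitVersion v1.31.2 on this cluster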
	I1028 12:22:04.038479  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:22:04.038595  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:22:04.040170  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.040244  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.040356  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.040466  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.040579  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:04.040700  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:04.042557  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:04.042662  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:04.042757  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:04.042877  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:04.042984  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:04.043096  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:04.043158  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:04.043247  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:04.043341  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:04.043442  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:04.043558  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:04.043622  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:04.043675  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:04.043718  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:04.043768  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:04.043825  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:04.043871  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:04.044021  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:04.044164  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:04.044224  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:04.044332  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:04.046085  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:04.046237  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:04.046370  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:04.046463  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:04.046544  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:04.046679  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:04.046728  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:04.046786  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.046976  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047099  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047318  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047393  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047554  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047611  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047799  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047892  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.048151  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.048167  186170 kubeadm.go:310] 
	I1028 12:22:04.048208  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:22:04.048252  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:22:04.048262  186170 kubeadm.go:310] 
	I1028 12:22:04.048317  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:22:04.048363  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:22:04.048453  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:22:04.048464  186170 kubeadm.go:310] 
	I1028 12:22:04.048557  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:22:04.048604  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:22:04.048658  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:22:04.048672  186170 kubeadm.go:310] 
	I1028 12:22:04.048789  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:22:04.048872  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:22:04.048879  186170 kubeadm.go:310] 
	I1028 12:22:04.049027  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:22:04.049143  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:22:04.049246  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:22:04.049347  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:22:04.049428  186170 kubeadm.go:310] 
	W1028 12:22:04.049541  186170 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1028 12:22:04.049593  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:22:04.555608  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:22:04.571673  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:22:04.583645  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:22:04.583667  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:22:04.583708  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:22:04.594436  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:22:04.594497  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:22:04.605784  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:22:04.616699  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:22:04.616781  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:22:04.628581  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.639511  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:22:04.639608  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.650503  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:22:04.662383  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:22:04.662445  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
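Editor's note: the stale-config check above repeats one pattern per kubeconfig file (grep for the control-plane URL, remove the file if the URL is absent). Condensed into a single loop as a sketch of the same grep-then-remove step:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"        # same per-file step the log shows
    done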
	I1028 12:22:04.673286  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:22:04.755504  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.755597  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.903636  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.903808  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.903902  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:05.095520  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:05.097710  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:05.097850  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:05.097937  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:05.098061  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:05.098152  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:05.098252  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:05.098346  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:05.098440  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:05.098905  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:05.099253  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:05.099726  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:05.099786  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:05.099872  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:05.357781  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:05.538771  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:05.744145  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:06.074866  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:06.090636  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:06.091772  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:06.091863  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:06.255534  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:06.257598  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:06.257740  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:06.264309  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:06.266553  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:06.266699  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:06.268340  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:46.271413  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:46.271550  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:46.271812  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:51.271863  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:51.272118  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:01.272732  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:01.272940  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:21.273621  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:21.273888  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.272718  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:24:01.273041  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.273073  186170 kubeadm.go:310] 
	I1028 12:24:01.273126  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:24:01.273220  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:24:01.273249  186170 kubeadm.go:310] 
	I1028 12:24:01.273303  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:24:01.273375  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:24:01.273508  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:24:01.273520  186170 kubeadm.go:310] 
	I1028 12:24:01.273665  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:24:01.273717  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:24:01.273760  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:24:01.273770  186170 kubeadm.go:310] 
	I1028 12:24:01.273900  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:24:01.273966  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:24:01.273972  186170 kubeadm.go:310] 
	I1028 12:24:01.274090  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:24:01.274165  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:24:01.274233  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:24:01.274294  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:24:01.274302  186170 kubeadm.go:310] 
	I1028 12:24:01.275128  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:24:01.275221  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:24:01.275324  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:24:01.275400  186170 kubeadm.go:394] duration metric: took 7m59.062813621s to StartCluster
	I1028 12:24:01.275480  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:24:01.275551  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:24:01.326735  186170 cri.go:89] found id: ""
	I1028 12:24:01.326760  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.326767  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:24:01.326774  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:24:01.326822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:24:01.368065  186170 cri.go:89] found id: ""
	I1028 12:24:01.368094  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.368103  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:24:01.368109  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:24:01.368162  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:24:01.410391  186170 cri.go:89] found id: ""
	I1028 12:24:01.410425  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.410437  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:24:01.410446  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:24:01.410515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:24:01.453290  186170 cri.go:89] found id: ""
	I1028 12:24:01.453332  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.453343  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:24:01.453361  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:24:01.453422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:24:01.490513  186170 cri.go:89] found id: ""
	I1028 12:24:01.490540  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.490547  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:24:01.490553  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:24:01.490600  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:24:01.528320  186170 cri.go:89] found id: ""
	I1028 12:24:01.528350  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.528361  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:24:01.528369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:24:01.528430  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:24:01.566998  186170 cri.go:89] found id: ""
	I1028 12:24:01.567030  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.567041  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:24:01.567050  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:24:01.567113  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:24:01.600946  186170 cri.go:89] found id: ""
	I1028 12:24:01.600973  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.600983  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:24:01.600997  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:24:01.601018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:24:01.615132  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:24:01.615161  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:24:01.737336  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:24:01.737371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:24:01.737387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:24:01.862216  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:24:01.862257  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:24:01.906635  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:24:01.906666  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
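Editor's note: the diagnostics gathered above can be reproduced directly on the node with the same commands the runner invoked:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
    sudo journalctl -u crio -n 400                                             # CRI-O service log
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a             # container status
    sudo journalctl -u kubelet -n 400                                          # kubelet service log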
	W1028 12:24:01.959555  186170 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:24:01.959629  186170 out.go:270] * 
	W1028 12:24:01.959691  186170 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.959706  186170 out.go:270] * 
	W1028 12:24:01.960513  186170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:24:01.963818  186170 out.go:201] 
	W1028 12:24:01.965768  186170 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.965852  186170 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:24:01.965874  186170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:24:01.967350  186170 out.go:201] 
	
	
	==> CRI-O <==
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.800527716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118243800493068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a65d2e76-824a-416c-82a4-744bb1c75100 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.801712843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4799802e-fee1-4f87-860f-e8b8e14ade64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.801905593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4799802e-fee1-4f87-860f-e8b8e14ade64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.802027536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4799802e-fee1-4f87-860f-e8b8e14ade64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.840715979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78c219ac-6eee-4896-8ec8-09d2191b3756 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.840806593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78c219ac-6eee-4896-8ec8-09d2191b3756 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.842068433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62f1a14d-73ba-43a4-b3e8-4a2d1c162610 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.843367083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118243843336168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62f1a14d-73ba-43a4-b3e8-4a2d1c162610 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.845360264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a3674aa-84f8-487f-8d4b-e9c5b865b3b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.845432365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a3674aa-84f8-487f-8d4b-e9c5b865b3b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.845468745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8a3674aa-84f8-487f-8d4b-e9c5b865b3b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.880803215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=667d6e7f-71ca-4850-95d7-ff8535e421a3 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.880937073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=667d6e7f-71ca-4850-95d7-ff8535e421a3 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.882116625Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a2a2eb8-e36d-4a2e-b2f3-5728809a0886 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.882494593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118243882473924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a2a2eb8-e36d-4a2e-b2f3-5728809a0886 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.883168519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e5bb3af-960c-463b-982b-589d21c55c79 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.883216356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e5bb3af-960c-463b-982b-589d21c55c79 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.883248656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8e5bb3af-960c-463b-982b-589d21c55c79 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.919187080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d9f9d59-7f5e-446c-b40c-851fedeaf49f name=/runtime.v1.RuntimeService/Version
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.919306144Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d9f9d59-7f5e-446c-b40c-851fedeaf49f name=/runtime.v1.RuntimeService/Version
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.920434935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f040ee36-3dd9-4809-a085-0a8e39dc92b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.920822699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118243920799508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f040ee36-3dd9-4809-a085-0a8e39dc92b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.921500375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8c249d3-7e46-437d-b5d9-ddfd215790db name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.921566049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8c249d3-7e46-437d-b5d9-ddfd215790db name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:24:03 old-k8s-version-089993 crio[635]: time="2024-10-28 12:24:03.921624455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d8c249d3-7e46-437d-b5d9-ddfd215790db name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct28 12:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056040] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049869] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.987135] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.705731] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.652068] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.124100] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.059356] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067583] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.203906] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.129426] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.273379] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[Oct28 12:16] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.076324] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.030052] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[ +12.368021] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 12:20] systemd-fstab-generator[5004]: Ignoring "noauto" option for root device
	[Oct28 12:22] systemd-fstab-generator[5284]: Ignoring "noauto" option for root device
	[  +0.072681] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:24:04 up 8 min,  0 users,  load average: 0.03, 0.13, 0.08
	Linux old-k8s-version-089993 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00024ec40, 0xc00073ea80, 0x1, 0x0, 0x0)
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000895340)
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]: goroutine 136 [select]:
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc00041f590, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001fef00, 0x0, 0x0)
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000895340)
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 28 12:24:01 old-k8s-version-089993 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 28 12:24:01 old-k8s-version-089993 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 28 12:24:01 old-k8s-version-089993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 28 12:24:01 old-k8s-version-089993 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 28 12:24:01 old-k8s-version-089993 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5513]: I1028 12:24:01.729963    5513 server.go:416] Version: v1.20.0
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5513]: I1028 12:24:01.730365    5513 server.go:837] Client rotation is on, will bootstrap in background
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5513]: I1028 12:24:01.732333    5513 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5513]: W1028 12:24:01.733193    5513 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 28 12:24:01 old-k8s-version-089993 kubelet[5513]: I1028 12:24:01.733382    5513 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 2 (235.633019ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-089993" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (727.23s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222: exit status 3 (3.167624594s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 12:12:53.189971  186437 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.75:22: connect: no route to host
	E1028 12:12:53.189991  186437 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.75:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-349222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-349222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154796835s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.75:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-349222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222: exit status 3 (3.060779985s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1028 12:13:02.405908  186517 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.75:22: connect: no route to host
	E1028 12:13:02.405931  186517 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.75:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349222" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.35s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709250 -n embed-certs-709250
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-28 12:29:48.60560735 +0000 UTC m=+5711.273831741
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709250 -n embed-certs-709250
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-709250 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-709250 logs -n 25: (2.115216744s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-601400                              | cert-expiration-601400       | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-871884             | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-219559 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | disable-driver-mounts-219559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:10 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709250            | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC | 28 Oct 24 12:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089993        | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-871884                  | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-349222  | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709250                 | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089993             | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-349222       | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:13 UTC | 28 Oct 24 12:21 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:13:02
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:13:02.452508  186547 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:13:02.452621  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452630  186547 out.go:358] Setting ErrFile to fd 2...
	I1028 12:13:02.452635  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452828  186547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:13:02.453378  186547 out.go:352] Setting JSON to false
	I1028 12:13:02.454320  186547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6925,"bootTime":1730110657,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:13:02.454420  186547 start.go:139] virtualization: kvm guest
	I1028 12:13:02.456754  186547 out.go:177] * [default-k8s-diff-port-349222] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:13:02.458343  186547 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:13:02.458413  186547 notify.go:220] Checking for updates...
	I1028 12:13:02.460946  186547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:13:02.462089  186547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:13:02.463460  186547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:13:02.464649  186547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:13:02.466107  186547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:13:02.468142  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:13:02.468518  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.468587  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.483793  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1028 12:13:02.484273  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.484861  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.484884  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.485260  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.485471  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.485712  186547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:13:02.485997  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.486030  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.501110  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I1028 12:13:02.501722  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.502335  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.502362  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.502682  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.502878  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.539766  186547 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:13:02.541024  186547 start.go:297] selected driver: kvm2
	I1028 12:13:02.541038  186547 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.541168  186547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:13:02.541929  186547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.542014  186547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:13:02.557443  186547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:13:02.557868  186547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:13:02.557902  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:13:02.557947  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:13:02.557987  186547 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.558098  186547 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.560651  186547 out.go:177] * Starting "default-k8s-diff-port-349222" primary control-plane node in "default-k8s-diff-port-349222" cluster
	I1028 12:13:02.693744  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:02.561767  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:13:02.561800  186547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:13:02.561810  186547 cache.go:56] Caching tarball of preloaded images
	I1028 12:13:02.561877  186547 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:13:02.561887  186547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:13:02.561973  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:13:02.562165  186547 start.go:360] acquireMachinesLock for default-k8s-diff-port-349222: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:13:08.773770  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:11.845825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:17.925957  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:20.997733  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:27.077858  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:30.149737  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:36.229851  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:39.301764  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:45.381781  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:48.453767  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:54.533793  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:57.605754  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:03.685848  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:06.757874  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:12.837829  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:15.909778  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:21.989850  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:25.061812  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:31.141825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:34.213757  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:40.293844  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:43.365865  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:49.445872  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:52.517750  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:58.597834  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:01.669837  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:07.749853  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:10.821838  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:13.826298  185942 start.go:364] duration metric: took 3m37.788021766s to acquireMachinesLock for "embed-certs-709250"
	I1028 12:15:13.826369  185942 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:13.826382  185942 fix.go:54] fixHost starting: 
	I1028 12:15:13.827047  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:13.827113  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:13.842889  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I1028 12:15:13.843403  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:13.843915  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:15:13.843938  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:13.844374  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:13.844568  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:13.844733  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:15:13.846440  185942 fix.go:112] recreateIfNeeded on embed-certs-709250: state=Stopped err=<nil>
	I1028 12:15:13.846464  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	W1028 12:15:13.846629  185942 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:13.848878  185942 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709250" ...
	I1028 12:15:13.850607  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Start
	I1028 12:15:13.850800  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring networks are active...
	I1028 12:15:13.851930  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network default is active
	I1028 12:15:13.852331  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network mk-embed-certs-709250 is active
	I1028 12:15:13.852652  185942 main.go:141] libmachine: (embed-certs-709250) Getting domain xml...
	I1028 12:15:13.853394  185942 main.go:141] libmachine: (embed-certs-709250) Creating domain...
	I1028 12:15:15.098667  185942 main.go:141] libmachine: (embed-certs-709250) Waiting to get IP...
	I1028 12:15:15.099525  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.099919  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.099951  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.099877  187018 retry.go:31] will retry after 285.25732ms: waiting for machine to come up
	I1028 12:15:15.386531  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.386992  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.387023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.386921  187018 retry.go:31] will retry after 327.08041ms: waiting for machine to come up
	I1028 12:15:15.715435  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.715900  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.715928  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.715846  187018 retry.go:31] will retry after 443.083162ms: waiting for machine to come up
	I1028 12:15:13.823652  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:13.823724  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824056  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:15:13.824085  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824284  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:15:13.826158  185546 machine.go:96] duration metric: took 4m37.413470632s to provisionDockerMachine
	I1028 12:15:13.826202  185546 fix.go:56] duration metric: took 4m37.436313043s for fixHost
	I1028 12:15:13.826208  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 4m37.436350273s
	W1028 12:15:13.826226  185546 start.go:714] error starting host: provision: host is not running
	W1028 12:15:13.826336  185546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 12:15:13.826346  185546 start.go:729] Will try again in 5 seconds ...
	I1028 12:15:16.160595  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.161024  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.161045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.161003  187018 retry.go:31] will retry after 599.535995ms: waiting for machine to come up
	I1028 12:15:16.761771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.762167  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.762213  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.762114  187018 retry.go:31] will retry after 527.275961ms: waiting for machine to come up
	I1028 12:15:17.290788  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:17.291124  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:17.291145  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:17.291098  187018 retry.go:31] will retry after 858.175967ms: waiting for machine to come up
	I1028 12:15:18.150644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.151045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.151071  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.150996  187018 retry.go:31] will retry after 727.962346ms: waiting for machine to come up
	I1028 12:15:18.880545  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.880990  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.881020  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.880942  187018 retry.go:31] will retry after 1.184956373s: waiting for machine to come up
	I1028 12:15:20.067178  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:20.067603  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:20.067635  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:20.067553  187018 retry.go:31] will retry after 1.635056202s: waiting for machine to come up
	I1028 12:15:18.827987  185546 start.go:360] acquireMachinesLock for no-preload-871884: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:15:21.703941  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:21.704338  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:21.704365  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:21.704302  187018 retry.go:31] will retry after 1.865473383s: waiting for machine to come up
	I1028 12:15:23.572362  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:23.572816  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:23.572843  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:23.572773  187018 retry.go:31] will retry after 2.604970031s: waiting for machine to come up
	I1028 12:15:26.181289  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:26.181849  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:26.181880  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:26.181788  187018 retry.go:31] will retry after 2.866004055s: waiting for machine to come up
	I1028 12:15:29.049604  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:29.050029  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:29.050068  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:29.049970  187018 retry.go:31] will retry after 3.046879869s: waiting for machine to come up
	I1028 12:15:33.350844  186170 start.go:364] duration metric: took 3m34.924904114s to acquireMachinesLock for "old-k8s-version-089993"
	I1028 12:15:33.350912  186170 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:33.350923  186170 fix.go:54] fixHost starting: 
	I1028 12:15:33.351392  186170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:33.351440  186170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:33.368339  186170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1028 12:15:33.368805  186170 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:33.369418  186170 main.go:141] libmachine: Using API Version  1
	I1028 12:15:33.369439  186170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:33.369784  186170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:33.369969  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:33.370125  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetState
	I1028 12:15:33.371873  186170 fix.go:112] recreateIfNeeded on old-k8s-version-089993: state=Stopped err=<nil>
	I1028 12:15:33.371908  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	W1028 12:15:33.372086  186170 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:33.374289  186170 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-089993" ...
	I1028 12:15:32.100252  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.100812  185942 main.go:141] libmachine: (embed-certs-709250) Found IP for machine: 192.168.39.211
	I1028 12:15:32.100831  185942 main.go:141] libmachine: (embed-certs-709250) Reserving static IP address...
	I1028 12:15:32.100842  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has current primary IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.101552  185942 main.go:141] libmachine: (embed-certs-709250) Reserved static IP address: 192.168.39.211
	I1028 12:15:32.101568  185942 main.go:141] libmachine: (embed-certs-709250) Waiting for SSH to be available...
	I1028 12:15:32.101602  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.101629  185942 main.go:141] libmachine: (embed-certs-709250) DBG | skip adding static IP to network mk-embed-certs-709250 - found existing host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"}
	I1028 12:15:32.101644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Getting to WaitForSSH function...
	I1028 12:15:32.104041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.104356  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104459  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH client type: external
	I1028 12:15:32.104488  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa (-rw-------)
	I1028 12:15:32.104519  185942 main.go:141] libmachine: (embed-certs-709250) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:32.104530  185942 main.go:141] libmachine: (embed-certs-709250) DBG | About to run SSH command:
	I1028 12:15:32.104538  185942 main.go:141] libmachine: (embed-certs-709250) DBG | exit 0
	I1028 12:15:32.233966  185942 main.go:141] libmachine: (embed-certs-709250) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:32.234363  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetConfigRaw
	I1028 12:15:32.235010  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.237443  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.237755  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.237783  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.238040  185942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/config.json ...
	I1028 12:15:32.238272  185942 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:32.238291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:32.238541  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.240765  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241165  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.241212  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241333  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.241520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241704  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241836  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.241989  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.242234  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.242247  185942 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:32.358412  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:32.358443  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.358773  185942 buildroot.go:166] provisioning hostname "embed-certs-709250"
	I1028 12:15:32.358810  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.359027  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.361776  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362122  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.362161  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362262  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.362429  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362579  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362709  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.362867  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.363084  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.363098  185942 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709250 && echo "embed-certs-709250" | sudo tee /etc/hostname
	I1028 12:15:32.492437  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709250
	
	I1028 12:15:32.492466  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.495108  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495394  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.495438  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495586  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.495771  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.495927  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.496054  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.496215  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.496399  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.496416  185942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709250/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:32.619038  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:32.619074  185942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:32.619113  185942 buildroot.go:174] setting up certificates
	I1028 12:15:32.619125  185942 provision.go:84] configureAuth start
	I1028 12:15:32.619137  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.619451  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.622055  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622448  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.622479  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622593  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.624610  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625037  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.625066  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625086  185942 provision.go:143] copyHostCerts
	I1028 12:15:32.625174  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:32.625190  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:32.625259  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:32.625396  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:32.625407  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:32.625444  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:32.625519  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:32.625541  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:32.625575  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:32.625645  185942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709250 san=[127.0.0.1 192.168.39.211 embed-certs-709250 localhost minikube]
	I1028 12:15:32.684483  185942 provision.go:177] copyRemoteCerts
	I1028 12:15:32.684547  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:32.684576  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.686926  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687244  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.687284  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687427  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.687617  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.687744  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.687890  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:32.776282  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:32.802180  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 12:15:32.829609  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:32.854274  185942 provision.go:87] duration metric: took 235.133526ms to configureAuth
	I1028 12:15:32.854305  185942 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:32.854474  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:15:32.854547  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.857363  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.857736  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.857771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.858038  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.858251  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858442  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858652  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.858809  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.858979  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.858996  185942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:33.101841  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:33.101870  185942 machine.go:96] duration metric: took 863.584969ms to provisionDockerMachine
	I1028 12:15:33.101883  185942 start.go:293] postStartSetup for "embed-certs-709250" (driver="kvm2")
	I1028 12:15:33.101896  185942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:33.101911  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.102249  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:33.102285  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.105023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.105357  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105493  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.105710  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.105881  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.106032  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.193225  185942 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:33.197548  185942 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:33.197570  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:33.197637  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:33.197739  185942 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:33.197861  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:33.207962  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:33.231808  185942 start.go:296] duration metric: took 129.908529ms for postStartSetup
	I1028 12:15:33.231853  185942 fix.go:56] duration metric: took 19.405472942s for fixHost
	I1028 12:15:33.231875  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.234609  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.234943  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.234979  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.235167  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.235370  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235642  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.235806  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:33.236026  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:33.236041  185942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:33.350639  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117733.322211717
	
	I1028 12:15:33.350663  185942 fix.go:216] guest clock: 1730117733.322211717
	I1028 12:15:33.350673  185942 fix.go:229] Guest: 2024-10-28 12:15:33.322211717 +0000 UTC Remote: 2024-10-28 12:15:33.231858201 +0000 UTC m=+237.345598419 (delta=90.353516ms)
	I1028 12:15:33.350707  185942 fix.go:200] guest clock delta is within tolerance: 90.353516ms
	I1028 12:15:33.350714  185942 start.go:83] releasing machines lock for "embed-certs-709250", held for 19.524379046s
	I1028 12:15:33.350737  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.350974  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:33.353647  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354012  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.354041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354244  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354690  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354873  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354973  185942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:33.355017  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.355090  185942 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:33.355116  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.357679  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358050  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358074  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358242  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358389  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.358542  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.358584  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358616  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358681  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.358721  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358892  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.359048  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.359197  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.443468  185942 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:33.498501  185942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:33.642221  185942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:33.649269  185942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:33.649336  185942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:33.665990  185942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:33.666023  185942 start.go:495] detecting cgroup driver to use...
	I1028 12:15:33.666103  185942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:33.683188  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:33.699441  185942 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:33.699506  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:33.714192  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:33.728325  185942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:33.850801  185942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:34.028929  185942 docker.go:233] disabling docker service ...
	I1028 12:15:34.028991  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:34.045600  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:34.059450  185942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:34.182310  185942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:34.305346  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:34.322354  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:34.342738  185942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:15:34.342804  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.354622  185942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:34.354687  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.365663  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.376503  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.388360  185942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:34.399960  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.419087  185942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.439700  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.451425  185942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:34.461657  185942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:34.461710  185942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:34.476292  185942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:34.487186  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:34.614984  185942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:15:34.709983  185942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:34.710061  185942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:34.715204  185942 start.go:563] Will wait 60s for crictl version
	I1028 12:15:34.715268  185942 ssh_runner.go:195] Run: which crictl
	I1028 12:15:34.719459  185942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:34.760330  185942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:34.760407  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.788635  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.820113  185942 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:15:34.821282  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:34.824384  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.824719  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:34.824746  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.825032  185942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:34.829502  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:34.842695  185942 kubeadm.go:883] updating cluster {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:34.842845  185942 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:15:34.842897  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:34.881154  185942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:15:34.881218  185942 ssh_runner.go:195] Run: which lz4
	I1028 12:15:34.885630  185942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:34.890045  185942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:34.890075  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:15:33.375597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .Start
	I1028 12:15:33.375787  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring networks are active...
	I1028 12:15:33.376736  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network default is active
	I1028 12:15:33.377208  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network mk-old-k8s-version-089993 is active
	I1028 12:15:33.377706  186170 main.go:141] libmachine: (old-k8s-version-089993) Getting domain xml...
	I1028 12:15:33.378449  186170 main.go:141] libmachine: (old-k8s-version-089993) Creating domain...
	I1028 12:15:34.645925  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting to get IP...
	I1028 12:15:34.646739  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.647234  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.647347  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.647218  187153 retry.go:31] will retry after 292.558863ms: waiting for machine to come up
	I1028 12:15:34.941609  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.942074  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.942102  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.942024  187153 retry.go:31] will retry after 331.872118ms: waiting for machine to come up
	I1028 12:15:35.275748  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.276283  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.276318  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.276244  187153 retry.go:31] will retry after 427.829102ms: waiting for machine to come up
	I1028 12:15:35.705935  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.706409  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.706438  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.706367  187153 retry.go:31] will retry after 371.58196ms: waiting for machine to come up
	I1028 12:15:36.079879  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.080445  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.080469  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.080392  187153 retry.go:31] will retry after 504.323728ms: waiting for machine to come up
	I1028 12:15:36.585967  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.586405  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.586436  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.586346  187153 retry.go:31] will retry after 676.776678ms: waiting for machine to come up
	I1028 12:15:37.265499  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:37.266087  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:37.266114  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:37.266037  187153 retry.go:31] will retry after 1.178891662s: waiting for machine to come up
	I1028 12:15:36.448704  185942 crio.go:462] duration metric: took 1.563096609s to copy over tarball
	I1028 12:15:36.448792  185942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:38.703177  185942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25435315s)
	I1028 12:15:38.703207  185942 crio.go:469] duration metric: took 2.254465841s to extract the tarball
	I1028 12:15:38.703217  185942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:38.741005  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:38.788350  185942 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:15:38.788376  185942 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:15:38.788383  185942 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1028 12:15:38.788491  185942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:15:38.788558  185942 ssh_runner.go:195] Run: crio config
	I1028 12:15:38.835642  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:38.835667  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:38.835678  185942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:15:38.835701  185942 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709250 NodeName:embed-certs-709250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:15:38.835822  185942 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709250"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
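The kubeadm.go:195 block above is the config minikube renders from the options logged at kubeadm.go:189. As a rough illustration only, here is a much-reduced text/template rendering of the same shape of document (the template and params struct are invented for this sketch, not minikube's real ones):

	package main

	import (
		"os"
		"text/template"
	)

	// params is a hypothetical subset of the options logged by kubeadm.go:189.
	type params struct {
		AdvertiseAddress  string
		APIServerPort     int
		NodeName          string
		PodSubnet         string
		ServiceCIDR       string
		KubernetesVersion string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values copied from the log lines above.
		_ = t.Execute(os.Stdout, params{
			AdvertiseAddress:  "192.168.39.211",
			APIServerPort:     8443,
			NodeName:          "embed-certs-709250",
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
			KubernetesVersion: "v1.31.2",
		})
	}

The rendered document is then copied to /var/tmp/minikube/kubeadm.yaml.new, as the scp lines below show.
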
	I1028 12:15:38.835879  185942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:15:38.846832  185942 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:15:38.846925  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:15:38.857103  185942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1028 12:15:38.874531  185942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:15:38.892213  185942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1028 12:15:38.910949  185942 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1028 12:15:38.915391  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:38.928874  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:39.045969  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:15:39.063425  185942 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250 for IP: 192.168.39.211
	I1028 12:15:39.063449  185942 certs.go:194] generating shared ca certs ...
	I1028 12:15:39.063465  185942 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:15:39.063638  185942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:15:39.063693  185942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:15:39.063709  185942 certs.go:256] generating profile certs ...
	I1028 12:15:39.063810  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.key
	I1028 12:15:39.063893  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce
	I1028 12:15:39.063951  185942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key
	I1028 12:15:39.064107  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:15:39.064153  185942 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:15:39.064167  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:15:39.064202  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:15:39.064239  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:15:39.064272  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:15:39.064335  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:39.064972  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:15:39.103261  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:15:39.145102  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:15:39.175151  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:15:39.205220  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 12:15:39.236045  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:15:39.273622  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:15:39.299336  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:15:39.325277  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:15:39.349878  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:15:39.374466  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:15:39.398920  185942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:15:39.416280  185942 ssh_runner.go:195] Run: openssl version
	I1028 12:15:39.422478  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:15:39.434671  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439581  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439635  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.445736  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:15:39.457128  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:15:39.468602  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473229  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473306  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.479063  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:15:39.490370  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:15:39.501843  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506514  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506579  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.512633  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:15:39.524115  185942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:15:39.528804  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:15:39.534982  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:15:39.541214  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:15:39.547734  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:15:39.554143  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:15:39.560719  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
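
Each of the openssl x509 ... -checkend 86400 runs above asks whether a certificate will still be valid 24 hours from now. The equivalent check in Go, as an illustrative sketch (paths copied from the log; error handling kept minimal):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Println(p, "error:", err)
				continue
			}
			fmt.Printf("%s expires within 24h: %v\n", p, soon)
		}
	}
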
	I1028 12:15:39.567076  185942 kubeadm.go:392] StartCluster: {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:15:39.567173  185942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:15:39.567226  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.611567  185942 cri.go:89] found id: ""
	I1028 12:15:39.611644  185942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:15:39.622561  185942 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:15:39.622583  185942 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:15:39.622637  185942 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:15:39.632757  185942 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:15:39.633873  185942 kubeconfig.go:125] found "embed-certs-709250" server: "https://192.168.39.211:8443"
	I1028 12:15:39.635943  185942 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:15:39.646060  185942 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I1028 12:15:39.646104  185942 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:15:39.646119  185942 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:15:39.646177  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.686806  185942 cri.go:89] found id: ""
	I1028 12:15:39.686891  185942 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:15:39.703935  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:15:39.714319  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:15:39.714341  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:15:39.714389  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:15:39.725383  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:15:39.725452  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:15:39.737075  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:15:39.748226  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:15:39.748311  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:15:39.760111  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.770287  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:15:39.770365  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.780776  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:15:39.790412  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:15:39.790475  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
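
The kubeadm.go:163 lines above treat each leftover kubeconfig as stale unless it already points at https://control-plane.minikube.internal:8443, and remove the ones that do not (here they simply do not exist yet). The same logic, sketched in Go under that assumption:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: remove it so kubeadm regenerates it below.
				_ = os.Remove(f)
				fmt.Println("removed stale kubeconfig:", f)
			}
		}
	}
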
	I1028 12:15:39.800727  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:15:39.811331  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:39.926791  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:38.446927  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:38.447488  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:38.447518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:38.447431  187153 retry.go:31] will retry after 1.170920623s: waiting for machine to come up
	I1028 12:15:39.619731  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:39.620169  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:39.620198  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:39.620119  187153 retry.go:31] will retry after 1.49376255s: waiting for machine to come up
	I1028 12:15:41.115247  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:41.115785  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:41.115815  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:41.115737  187153 retry.go:31] will retry after 2.161966931s: waiting for machine to come up
	I1028 12:15:43.280454  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:43.280989  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:43.281026  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:43.280932  187153 retry.go:31] will retry after 2.179284899s: waiting for machine to come up
	I1028 12:15:41.043020  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.11617977s)
	I1028 12:15:41.043082  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.246311  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.309073  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.392313  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:15:41.392425  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:41.893601  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.393518  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.444753  185942 api_server.go:72] duration metric: took 1.052438751s to wait for apiserver process to appear ...
	I1028 12:15:42.444794  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:15:42.444821  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.214786  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.214821  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.214837  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.252422  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.252458  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.445825  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.451454  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.451549  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:45.945668  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.956623  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.956667  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.445240  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.450197  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:46.450223  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.945901  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.950302  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:15:46.956218  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:15:46.956245  185942 api_server.go:131] duration metric: took 4.511443878s to wait for apiserver health ...
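
api_server.go polls https://192.168.39.211:8443/healthz until it answers 200, riding out the early 403 (anonymous user) and 500 (post-start hooks still running) responses shown above. A simplified version of that loop (InsecureSkipVerify is a shortcut for this sketch; the real client trusts the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip TLS verification instead of loading the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.211:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
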
	I1028 12:15:46.956254  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:46.956260  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:46.958294  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:15:45.462983  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:45.463534  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:45.463560  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:45.463491  187153 retry.go:31] will retry after 2.2623086s: waiting for machine to come up
	I1028 12:15:47.728769  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:47.729277  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:47.729332  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:47.729241  187153 retry.go:31] will retry after 4.393695309s: waiting for machine to come up
	I1028 12:15:46.959738  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:15:46.970473  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:15:46.994129  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:15:47.003807  185942 system_pods.go:59] 8 kube-system pods found
	I1028 12:15:47.003843  185942 system_pods.go:61] "coredns-7c65d6cfc9-j66cd" [d53b2839-00f6-4ccc-833d-76424b3efdba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:15:47.003851  185942 system_pods.go:61] "etcd-embed-certs-709250" [24761127-dde4-4f5d-b7cf-a13e37366e0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:15:47.003858  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [17996153-32c3-41e0-be90-fc9e058e0080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:15:47.003864  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [4ce37c00-1015-45f8-b847-1ca92cdf3a31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:15:47.003871  185942 system_pods.go:61] "kube-proxy-dl7xq" [a06ed5ff-b1e9-42c7-ba26-f120bb03ccb6] Running
	I1028 12:15:47.003877  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [c76e654e-a7fc-4891-8e73-bd74f9178c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:15:47.003883  185942 system_pods.go:61] "metrics-server-6867b74b74-k69kz" [568d5308-3f66-459b-b5c8-594d9400b6c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:15:47.003886  185942 system_pods.go:61] "storage-provisioner" [6552cef1-21b6-4306-a3e2-ff16793257dc] Running
	I1028 12:15:47.003893  185942 system_pods.go:74] duration metric: took 9.734271ms to wait for pod list to return data ...
	I1028 12:15:47.003900  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:15:47.008428  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:15:47.008465  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:15:47.008479  185942 node_conditions.go:105] duration metric: took 4.573275ms to run NodePressure ...
	I1028 12:15:47.008504  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:47.285509  185942 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291045  185942 kubeadm.go:739] kubelet initialised
	I1028 12:15:47.291069  185942 kubeadm.go:740] duration metric: took 5.521713ms waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291078  185942 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:15:47.299072  185942 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:49.312365  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:50.804953  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:50.804976  185942 pod_ready.go:82] duration metric: took 3.505873868s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:50.804986  185942 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
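
pod_ready.go is doing the usual client-go check: fetch the pod and look at its Ready condition until it reports True. A bare-bones sketch of that wait (the kubeconfig path is a placeholder, and the example assumes the k8s.io/client-go module is available):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-j66cd", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
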
	I1028 12:15:52.126559  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126960  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has current primary IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126988  186170 main.go:141] libmachine: (old-k8s-version-089993) Found IP for machine: 192.168.61.119
	I1028 12:15:52.127021  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserving static IP address...
	I1028 12:15:52.127441  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.127474  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | skip adding static IP to network mk-old-k8s-version-089993 - found existing host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"}
	I1028 12:15:52.127486  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserved static IP address: 192.168.61.119
	I1028 12:15:52.127498  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting for SSH to be available...
	I1028 12:15:52.127551  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Getting to WaitForSSH function...
	I1028 12:15:52.129970  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130313  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.130349  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH client type: external
	I1028 12:15:52.130540  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa (-rw-------)
	I1028 12:15:52.130565  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:52.130578  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | About to run SSH command:
	I1028 12:15:52.130593  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | exit 0
	I1028 12:15:52.253686  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:52.254051  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:15:52.254719  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.257217  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257692  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.257719  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257996  186170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:15:52.258203  186170 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:52.258222  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:52.258456  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.260665  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.260972  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.261012  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.261139  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.261295  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261451  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261632  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.261786  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.261968  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.261979  186170 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:52.362092  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:52.362129  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362362  186170 buildroot.go:166] provisioning hostname "old-k8s-version-089993"
	I1028 12:15:52.362386  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362588  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.365124  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.365489  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365598  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.365768  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.365924  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.366060  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.366238  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.366424  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.366441  186170 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089993 && echo "old-k8s-version-089993" | sudo tee /etc/hostname
	I1028 12:15:52.485032  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089993
	
	I1028 12:15:52.485069  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.487733  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488095  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.488129  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488270  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.488458  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488724  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.488872  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.489063  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.489079  186170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089993/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:52.599940  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:52.599975  186170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:52.600009  186170 buildroot.go:174] setting up certificates
	I1028 12:15:52.600019  186170 provision.go:84] configureAuth start
	I1028 12:15:52.600028  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.600319  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.603047  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603357  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.603385  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603536  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.605827  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606164  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.606190  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606334  186170 provision.go:143] copyHostCerts
	I1028 12:15:52.606414  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:52.606429  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:52.606500  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:52.606650  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:52.606661  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:52.606693  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:52.606795  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:52.606805  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:52.606834  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:52.606904  186170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089993 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-089993]
	I1028 12:15:52.715475  186170 provision.go:177] copyRemoteCerts
	I1028 12:15:52.715531  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:52.715556  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.718456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718758  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.718801  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718993  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.719189  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.719339  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.719461  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:52.802994  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:52.832311  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:15:52.864304  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:52.892143  186170 provision.go:87] duration metric: took 292.108499ms to configureAuth
	I1028 12:15:52.892178  186170 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:52.892401  186170 config.go:182] Loaded profile config "old-k8s-version-089993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:15:52.892499  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.895607  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.895996  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.896031  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.896198  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.896442  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896615  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896796  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.897005  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.897225  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.897249  186170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:53.144636  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:53.144668  186170 machine.go:96] duration metric: took 886.451205ms to provisionDockerMachine
	I1028 12:15:53.144683  186170 start.go:293] postStartSetup for "old-k8s-version-089993" (driver="kvm2")
	I1028 12:15:53.144701  186170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:53.144739  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.145102  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:53.145135  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.147486  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147776  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.147805  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147926  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.148136  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.148297  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.148438  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.228968  186170 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:53.233756  186170 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:53.233783  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:53.233862  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:53.233981  186170 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:53.234114  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:53.244314  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:53.273027  186170 start.go:296] duration metric: took 128.321696ms for postStartSetup
	I1028 12:15:53.273067  186170 fix.go:56] duration metric: took 19.922145767s for fixHost
	I1028 12:15:53.273087  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.275762  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276036  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.276069  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276227  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.276431  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276610  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276759  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.276948  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:53.277130  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:53.277140  186170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:53.378422  186547 start.go:364] duration metric: took 2m50.816229865s to acquireMachinesLock for "default-k8s-diff-port-349222"
	I1028 12:15:53.378482  186547 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:53.378491  186547 fix.go:54] fixHost starting: 
	I1028 12:15:53.378917  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:53.378971  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:53.395967  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I1028 12:15:53.396434  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:53.396923  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:15:53.396950  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:53.397332  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:53.397552  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:15:53.397726  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:15:53.399287  186547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349222: state=Stopped err=<nil>
	I1028 12:15:53.399337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	W1028 12:15:53.399468  186547 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:53.401446  186547 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-349222" ...
	I1028 12:15:53.378277  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117753.349360033
	
	I1028 12:15:53.378307  186170 fix.go:216] guest clock: 1730117753.349360033
	I1028 12:15:53.378327  186170 fix.go:229] Guest: 2024-10-28 12:15:53.349360033 +0000 UTC Remote: 2024-10-28 12:15:53.273071551 +0000 UTC m=+234.997009775 (delta=76.288482ms)
	I1028 12:15:53.378346  186170 fix.go:200] guest clock delta is within tolerance: 76.288482ms
	I1028 12:15:53.378351  186170 start.go:83] releasing machines lock for "old-k8s-version-089993", held for 20.027466326s
	I1028 12:15:53.378379  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.378640  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:53.381602  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.381951  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.381980  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.382165  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382654  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382864  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382949  186170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:53.382997  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.383090  186170 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:53.383109  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.385829  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.385926  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386223  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386272  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386303  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386343  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386522  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386692  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.386704  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386849  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387012  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.387009  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.387217  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387355  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.462736  186170 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:53.490076  186170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:53.637493  186170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:53.643609  186170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:53.643668  186170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:53.660695  186170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:53.660725  186170 start.go:495] detecting cgroup driver to use...
	I1028 12:15:53.660797  186170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:53.677283  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:53.691838  186170 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:53.691914  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:53.706354  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:53.721257  186170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:53.843177  186170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:54.012260  186170 docker.go:233] disabling docker service ...
	I1028 12:15:54.012369  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:54.028355  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:54.042371  186170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:54.175559  186170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:54.308690  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:54.323918  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:54.343000  186170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:15:54.343072  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.354540  186170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:54.354620  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.366058  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.377720  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.388649  186170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:54.401499  186170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:54.414177  186170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:54.414250  186170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:54.429049  186170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:54.441955  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:54.588173  186170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:15:54.686671  186170 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:54.686732  186170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:54.692246  186170 start.go:563] Will wait 60s for crictl version
	I1028 12:15:54.692303  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:15:54.697056  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:54.749343  186170 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:54.749410  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.783554  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.817295  186170 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:15:52.838774  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.811974  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:53.811997  185942 pod_ready.go:82] duration metric: took 3.00700476s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:53.812008  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:55.824400  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.402920  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Start
	I1028 12:15:53.403172  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring networks are active...
	I1028 12:15:53.403912  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network default is active
	I1028 12:15:53.404195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network mk-default-k8s-diff-port-349222 is active
	I1028 12:15:53.404615  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Getting domain xml...
	I1028 12:15:53.405554  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Creating domain...
	I1028 12:15:54.734540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting to get IP...
	I1028 12:15:54.735417  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735784  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735880  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:54.735759  187305 retry.go:31] will retry after 268.036011ms: waiting for machine to come up
	I1028 12:15:55.005376  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.005999  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.006032  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.005930  187305 retry.go:31] will retry after 255.477665ms: waiting for machine to come up
	I1028 12:15:55.263500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264118  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264153  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.264087  187305 retry.go:31] will retry after 354.942061ms: waiting for machine to come up
	I1028 12:15:55.620877  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621664  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621698  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.621610  187305 retry.go:31] will retry after 569.620755ms: waiting for machine to come up
	I1028 12:15:56.192393  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192872  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.192803  187305 retry.go:31] will retry after 703.637263ms: waiting for machine to come up
	I1028 12:15:56.897762  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898304  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.898214  187305 retry.go:31] will retry after 713.628482ms: waiting for machine to come up
	I1028 12:15:54.818674  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:54.822118  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822477  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:54.822508  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822713  186170 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:54.827066  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:54.839718  186170 kubeadm.go:883] updating cluster {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:54.839871  186170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:15:54.839932  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:54.896582  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:54.896647  186170 ssh_runner.go:195] Run: which lz4
	I1028 12:15:54.901264  186170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:54.905758  186170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:54.905798  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:15:56.763719  186170 crio.go:462] duration metric: took 1.862485619s to copy over tarball
	I1028 12:15:56.763807  186170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:58.321600  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:00.018244  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.018285  185942 pod_ready.go:82] duration metric: took 6.206271068s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.018297  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028610  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.028638  185942 pod_ready.go:82] duration metric: took 10.334289ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028653  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041057  185942 pod_ready.go:93] pod "kube-proxy-dl7xq" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.041091  185942 pod_ready.go:82] duration metric: took 12.429027ms for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041106  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049617  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.049645  185942 pod_ready.go:82] duration metric: took 8.529436ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049659  185942 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:57.613338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613844  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613873  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:57.613796  187305 retry.go:31] will retry after 1.188479203s: waiting for machine to come up
	I1028 12:15:58.803300  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803690  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803724  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:58.803650  187305 retry.go:31] will retry after 1.439057212s: waiting for machine to come up
	I1028 12:16:00.244665  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245201  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245239  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:00.245141  187305 retry.go:31] will retry after 1.842038011s: waiting for machine to come up
	I1028 12:16:02.090283  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090879  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:02.090828  187305 retry.go:31] will retry after 1.556155538s: waiting for machine to come up
	I1028 12:15:59.824110  186170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060253776s)
	I1028 12:15:59.824148  186170 crio.go:469] duration metric: took 3.060398276s to extract the tarball
	I1028 12:15:59.824158  186170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:59.871783  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:59.913216  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:59.913249  186170 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:15:59.913338  186170 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.913374  186170 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.913404  186170 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.913415  186170 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.913435  186170 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.913459  186170 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.913378  186170 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:15:59.913372  186170 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:15:59.914923  186170 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.914935  186170 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.914944  186170 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.914924  186170 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:15:59.915002  186170 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.915023  186170 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.107392  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.125355  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.128498  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.134762  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.138350  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.141722  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:16:00.186291  186170 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:16:00.186340  186170 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.186404  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253168  186170 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:16:00.253211  186170 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.253256  186170 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:16:00.253279  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253288  186170 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.253328  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290772  186170 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:16:00.290817  186170 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.290857  186170 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:16:00.290890  186170 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:16:00.290869  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290913  186170 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:16:00.290946  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290970  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.290896  186170 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.291016  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.291049  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.291080  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.317629  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.377316  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.377376  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.377463  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.377515  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.488216  186170 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:16:00.488279  186170 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.488337  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.513051  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.556242  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.556277  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.556380  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.556435  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.556544  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.556560  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.634253  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.737688  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.737739  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.737799  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:16:00.737870  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:16:00.737897  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:16:00.738000  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.832218  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:16:00.832247  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:16:00.832284  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:16:00.844460  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.880788  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:16:01.121687  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:01.269970  186170 cache_images.go:92] duration metric: took 1.356701981s to LoadCachedImages
	W1028 12:16:01.270091  186170 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 12:16:01.270114  186170 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1028 12:16:01.270229  186170 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089993 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:01.270317  186170 ssh_runner.go:195] Run: crio config
	I1028 12:16:01.330579  186170 cni.go:84] Creating CNI manager for ""
	I1028 12:16:01.330604  186170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:01.330615  186170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:01.330634  186170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089993 NodeName:old-k8s-version-089993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:16:01.330861  186170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089993"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:01.330940  186170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:16:01.342449  186170 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:01.342542  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:01.354804  186170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:16:01.373823  186170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:01.393848  186170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:16:01.414537  186170 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:01.419057  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:01.434491  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:01.605220  186170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:01.629171  186170 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993 for IP: 192.168.61.119
	I1028 12:16:01.629198  186170 certs.go:194] generating shared ca certs ...
	I1028 12:16:01.629223  186170 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:01.629411  186170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:01.629473  186170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:01.629486  186170 certs.go:256] generating profile certs ...
	I1028 12:16:01.629625  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key
	I1028 12:16:01.629692  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee
	I1028 12:16:01.629740  186170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key
	I1028 12:16:01.629886  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:01.629929  186170 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:01.629943  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:01.629984  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:01.630025  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:01.630060  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:01.630113  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:01.630911  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:01.673352  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:01.705371  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:01.731174  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:01.775555  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:16:01.809878  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:01.842241  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:01.876753  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:16:01.914897  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:01.945991  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:01.977763  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:02.010010  186170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:02.034184  186170 ssh_runner.go:195] Run: openssl version
	I1028 12:16:02.042784  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:02.055148  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060669  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060751  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.067345  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:02.079427  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:02.091613  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.096996  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.097061  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.103561  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:02.115762  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:02.128405  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133889  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133961  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.140274  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
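Editor's note: the symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: "openssl x509 -hash -noout -in cert.pem" prints the hash that TLS libraries use to look up a CA in /etc/ssl/certs, and linking the copied certificate as <hash>.0 makes it discoverable. A rough Go sketch of the same two steps (paths are placeholders from this log, not minikube's code):

// cahash_sketch.go: hashes a CA certificate the way "openssl x509 -hash" does
// (by shelling out to openssl) and creates the /etc/ssl/certs/<hash>.0 symlink.
// Purely illustrative; paths and error handling are simplified.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic "ln -fs": replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder path; the log above links /usr/share/ca-certificates/minikubeCA.pem.
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}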
	I1028 12:16:02.155800  186170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:02.162343  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:02.170755  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:02.179332  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:02.187694  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:02.196183  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:02.204538  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
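Editor's note: each "-checkend 86400" call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would force certificate regeneration. The equivalent check in Go, as a rough sketch using crypto/x509 (the path is a placeholder taken from this log):

// checkend_sketch.go: rough Go equivalent of "openssl x509 -checkend 86400";
// exits non-zero if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks the apiserver, etcd and front-proxy client certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
}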
	I1028 12:16:02.212604  186170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:02.212715  186170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:02.212796  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.260250  186170 cri.go:89] found id: ""
	I1028 12:16:02.260350  186170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:02.274246  186170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:02.274269  186170 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:02.274335  186170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:02.287972  186170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:02.288983  186170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:16:02.289661  186170 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-132631/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089993" cluster setting kubeconfig missing "old-k8s-version-089993" context setting]
	I1028 12:16:02.290778  186170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:02.292747  186170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:02.306303  186170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1028 12:16:02.306357  186170 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:02.306375  186170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:02.306438  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.348962  186170 cri.go:89] found id: ""
	I1028 12:16:02.349041  186170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:02.366483  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:02.377667  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:02.377690  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:02.377758  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:02.387857  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:02.387915  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:02.398137  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:02.408922  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:02.408992  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:02.419044  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.428952  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:02.429020  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.439488  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:02.450112  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:02.450174  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:02.461051  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:02.472059  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.607734  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.165378  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:04.555857  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:03.648337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648760  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:03.648736  187305 retry.go:31] will retry after 2.586516153s: waiting for machine to come up
	I1028 12:16:06.236934  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237402  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237433  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:06.237352  187305 retry.go:31] will retry after 3.507901898s: waiting for machine to come up
	I1028 12:16:03.452795  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.710145  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.811788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
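Editor's note: because existing configuration files were found (kubeadm.go:408 above), minikube rebuilds the primary control plane by replaying individual "kubeadm init phase" subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied kubeadm.yaml rather than running a full "kubeadm init", and then polls for the kube-apiserver process in the pgrep loop that follows. A compressed, purely illustrative Go sketch of that phase loop; the real invocations also run under sudo with the versioned binaries on PATH, as shown above.

// phases_sketch.go: illustrative loop over the kubeadm init phases seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, ph := range phases {
		args := append(append([]string{"init", "phase"}, ph...), "--config", cfg)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", ph, err)
			os.Exit(1)
		}
	}
}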
	I1028 12:16:03.903114  186170 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:03.903247  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.403775  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.904258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.403398  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.903353  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.403907  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.903762  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.403316  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.904259  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.557581  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.056276  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.746980  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747449  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747482  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:09.747401  187305 retry.go:31] will retry after 4.499585546s: waiting for machine to come up
	I1028 12:16:08.403804  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:08.903726  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.404155  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.903968  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.403990  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.903742  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.403836  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.904088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.403293  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.903635  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.487114  185546 start.go:364] duration metric: took 56.6590668s to acquireMachinesLock for "no-preload-871884"
	I1028 12:16:15.487176  185546 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:16:15.487190  185546 fix.go:54] fixHost starting: 
	I1028 12:16:15.487650  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:16:15.487713  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:16:15.508857  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I1028 12:16:15.509318  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:16:15.510000  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:16:15.510037  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:16:15.510385  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:16:15.510599  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:15.510779  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:16:15.512738  185546 fix.go:112] recreateIfNeeded on no-preload-871884: state=Stopped err=<nil>
	I1028 12:16:15.512772  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	W1028 12:16:15.512963  185546 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:16:15.514890  185546 out.go:177] * Restarting existing kvm2 VM for "no-preload-871884" ...
	I1028 12:16:11.056427  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:13.058549  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.556621  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.516551  185546 main.go:141] libmachine: (no-preload-871884) Calling .Start
	I1028 12:16:15.516786  185546 main.go:141] libmachine: (no-preload-871884) Ensuring networks are active...
	I1028 12:16:15.517934  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network default is active
	I1028 12:16:15.518543  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network mk-no-preload-871884 is active
	I1028 12:16:15.519028  185546 main.go:141] libmachine: (no-preload-871884) Getting domain xml...
	I1028 12:16:15.519878  185546 main.go:141] libmachine: (no-preload-871884) Creating domain...
	I1028 12:16:14.249128  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249645  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has current primary IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249674  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Found IP for machine: 192.168.50.75
	I1028 12:16:14.249689  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserving static IP address...
	I1028 12:16:14.250120  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserved static IP address: 192.168.50.75
	I1028 12:16:14.250139  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for SSH to be available...
	I1028 12:16:14.250164  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.250205  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | skip adding static IP to network mk-default-k8s-diff-port-349222 - found existing host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"}
	I1028 12:16:14.250222  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Getting to WaitForSSH function...
	I1028 12:16:14.252540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.252883  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.252926  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.253035  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH client type: external
	I1028 12:16:14.253075  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa (-rw-------)
	I1028 12:16:14.253100  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:14.253115  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | About to run SSH command:
	I1028 12:16:14.253129  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | exit 0
	I1028 12:16:14.373688  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | SSH cmd err, output: <nil>: 
	I1028 12:16:14.374101  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetConfigRaw
	I1028 12:16:14.374713  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.377338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.377824  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.377857  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.378094  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:16:14.378326  186547 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:14.378345  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:14.378556  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.380694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.380976  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.380992  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.381143  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.381356  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381521  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381678  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.381882  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.382107  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.382119  186547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:14.490030  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:14.490061  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490303  186547 buildroot.go:166] provisioning hostname "default-k8s-diff-port-349222"
	I1028 12:16:14.490331  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490523  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.492989  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493395  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.493426  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493626  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.493794  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.493960  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.494104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.494258  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.494427  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.494439  186547 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-349222 && echo "default-k8s-diff-port-349222" | sudo tee /etc/hostname
	I1028 12:16:14.604373  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-349222
	
	I1028 12:16:14.604405  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.607135  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607437  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.607465  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.607891  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608060  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608187  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.608353  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.608549  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.608569  186547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-349222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-349222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-349222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:14.714933  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
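Editor's note: the inline script above makes the 127.0.1.1 hostname mapping idempotent: if any /etc/hosts line already ends with the new hostname it does nothing, otherwise it rewrites an existing 127.0.1.1 entry or appends a new one. A rough Go equivalent of the same edit (the file path and hostname below are placeholders taken from this log):

// hosts_sketch.go: idempotent 127.0.1.1 hostname mapping, mirroring the shell above.
package main

import (
	"os"
	"regexp"
	"strings"
)

func setLoopbackHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	mapped := regexp.MustCompile(`\s` + regexp.QuoteMeta(name) + `$`)
	for _, l := range lines {
		if mapped.MatchString(l) {
			return nil // hostname already present, nothing to do
		}
	}
	loop := regexp.MustCompile(`^127\.0\.1\.1\s`)
	replaced := false
	for i, l := range lines {
		if loop.MatchString(l) {
			lines[i] = "127.0.1.1 " + name
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Placeholder path for local experimentation; the node edits /etc/hosts via sudo.
	_ = setLoopbackHostname("/tmp/hosts", "default-k8s-diff-port-349222")
}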
	I1028 12:16:14.714963  186547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:14.714990  186547 buildroot.go:174] setting up certificates
	I1028 12:16:14.714998  186547 provision.go:84] configureAuth start
	I1028 12:16:14.715007  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.715321  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.718051  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.718406  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718504  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.720638  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.720945  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.720972  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.721127  186547 provision.go:143] copyHostCerts
	I1028 12:16:14.721198  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:14.721213  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:14.721283  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:14.721407  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:14.721417  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:14.721446  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:14.721522  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:14.721544  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:14.721571  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:14.721634  186547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-349222 san=[127.0.0.1 192.168.50.75 default-k8s-diff-port-349222 localhost minikube]
	I1028 12:16:14.854227  186547 provision.go:177] copyRemoteCerts
	I1028 12:16:14.854285  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:14.854314  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.857250  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857590  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.857620  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857897  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.858091  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.858286  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.858434  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:14.940752  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:14.967575  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 12:16:14.992693  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:16:15.017801  186547 provision.go:87] duration metric: took 302.790563ms to configureAuth
	I1028 12:16:15.017831  186547 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:15.018073  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:15.018168  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.021181  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.021574  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021719  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.021894  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022113  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022317  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.022564  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.022744  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.022761  186547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:15.257308  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:15.257339  186547 machine.go:96] duration metric: took 878.998573ms to provisionDockerMachine
	I1028 12:16:15.257350  186547 start.go:293] postStartSetup for "default-k8s-diff-port-349222" (driver="kvm2")
	I1028 12:16:15.257360  186547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:15.257378  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.257695  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:15.257721  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.260380  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260767  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.260795  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260990  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.261186  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.261370  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.261513  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.341376  186547 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:15.345736  186547 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:15.345760  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:15.345820  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:15.345891  186547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:15.345978  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:15.355662  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:15.381750  186547 start.go:296] duration metric: took 124.385481ms for postStartSetup
	I1028 12:16:15.381788  186547 fix.go:56] duration metric: took 22.00329785s for fixHost
	I1028 12:16:15.381807  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.384756  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385099  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.385130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385359  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.385587  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385782  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385974  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.386165  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.386345  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.386355  186547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:15.486905  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117775.445749296
	
	I1028 12:16:15.486934  186547 fix.go:216] guest clock: 1730117775.445749296
	I1028 12:16:15.486944  186547 fix.go:229] Guest: 2024-10-28 12:16:15.445749296 +0000 UTC Remote: 2024-10-28 12:16:15.381791731 +0000 UTC m=+192.967058764 (delta=63.957565ms)
	I1028 12:16:15.487005  186547 fix.go:200] guest clock delta is within tolerance: 63.957565ms
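Editor's note: the clock check above runs "date +%s.%N" in the guest and compares the result against the host's wall clock; here the roughly 64 ms delta is accepted, so no time resync is needed. A small sketch of that comparison (the tolerance constant is an assumption for illustration; the log only states "within tolerance"):

// clockdelta_sketch.go: compare a guest epoch timestamp (as printed by
// "date +%s.%N") against the local clock, as in the fix.go lines above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestRaw := "1730117775.445749296" // value returned by the guest in this log
	sec, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))
	delta := time.Since(guest)
	// Assumed tolerance, purely for illustration.
	const tolerance = 1 * time.Second
	fmt.Printf("delta=%v within=%v\n", delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}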
	I1028 12:16:15.487018  186547 start.go:83] releasing machines lock for "default-k8s-diff-port-349222", held for 22.108560462s
	I1028 12:16:15.487082  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.487382  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:15.490840  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491343  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.491374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491528  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492208  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492431  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492581  186547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:15.492657  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.492706  186547 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:15.492746  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.496062  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496119  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496544  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496901  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497225  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497257  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497458  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497583  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497665  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.497798  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497977  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.590741  186547 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:15.615347  186547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:15.762979  186547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:15.770132  186547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:15.770221  186547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:15.788651  186547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:15.788684  186547 start.go:495] detecting cgroup driver to use...
	I1028 12:16:15.788751  186547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:15.806118  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:15.820916  186547 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:15.820986  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:15.835770  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:15.850994  186547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:15.979465  186547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:16.160837  186547 docker.go:233] disabling docker service ...
	I1028 12:16:16.160924  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:16.177934  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:16.194616  186547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:16.320605  186547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:16.464175  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:16.479626  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:16.502747  186547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:16.502889  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.514636  186547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:16.514695  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.528137  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.539961  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.552263  186547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:16.566275  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.578632  186547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.599084  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.611250  186547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:16.621976  186547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:16.622052  186547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:16.640800  186547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:16:16.651767  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:16.806628  186547 ssh_runner.go:195] Run: sudo systemctl restart crio
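The 12:16:16 lines above reconfigure the CRI-O runtime in place before restarting it: sed rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch cgroup_manager to cgroupfs, move conmon into the "pod" cgroup, and open unprivileged ports, then br_netfilter and ip_forward are enabled and crio is restarted. A minimal sketch of the same config rewrite done directly in Go (regexp over the file contents; the path, image, and keys come from the log lines above, the helper name is illustrative, not minikube's own code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits from the log: pin the pause image
// and force the cgroupfs cgroup manager in an existing 02-crio.conf.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = %q`, pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = %q`, cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}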
	I1028 12:16:16.903584  186547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:16.903655  186547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:16.909873  186547 start.go:563] Will wait 60s for crictl version
	I1028 12:16:16.909950  186547 ssh_runner.go:195] Run: which crictl
	I1028 12:16:16.915388  186547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:16.964424  186547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:16.964517  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:16.997415  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:17.032323  186547 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:17.033747  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:17.036500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.036903  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:17.036935  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.037118  186547 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:17.041698  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:17.056649  186547 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:17.056792  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:17.056840  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:17.099143  186547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:17.099233  186547 ssh_runner.go:195] Run: which lz4
	I1028 12:16:17.103882  186547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:16:17.108660  186547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:16:17.108699  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
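At 12:16:17 the run decides whether the preloaded image tarball is needed: `crictl images --output json` does not list the expected kube-apiserver:v1.31.2 image, /preloaded.tar.lz4 does not exist on the node, so the ~392 MB preload tarball is copied over (and extracted a few lines further down). A minimal sketch of that decision in Go, assuming crictl's JSON shape ({"images":[{"repoTags":[...]}]}) and a locally exec'd crictl; in the test itself the command runs over SSH:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether `crictl images --output json` already lists want,
// i.e. whether copying and extracting the preload tarball can be skipped.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
	fmt.Println(ok, err) // false => scp and extract /preloaded.tar.lz4
}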
	I1028 12:16:13.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:13.903443  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.404017  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.903385  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.403903  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.904106  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.403713  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.903397  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.404299  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.903855  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.559178  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:19.560739  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:16.842086  185546 main.go:141] libmachine: (no-preload-871884) Waiting to get IP...
	I1028 12:16:16.843056  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:16.843514  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:16.843599  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:16.843484  187500 retry.go:31] will retry after 240.188984ms: waiting for machine to come up
	I1028 12:16:17.085193  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.085702  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.085739  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.085649  187500 retry.go:31] will retry after 361.44193ms: waiting for machine to come up
	I1028 12:16:17.448961  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.449619  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.449645  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.449576  187500 retry.go:31] will retry after 386.179326ms: waiting for machine to come up
	I1028 12:16:17.837097  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.837879  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.837907  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.837834  187500 retry.go:31] will retry after 531.12665ms: waiting for machine to come up
	I1028 12:16:18.370266  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:18.370803  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:18.370834  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:18.370746  187500 retry.go:31] will retry after 760.20134ms: waiting for machine to come up
	I1028 12:16:19.132853  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.133415  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.133444  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.133360  187500 retry.go:31] will retry after 817.773678ms: waiting for machine to come up
	I1028 12:16:19.952317  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.952800  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.952824  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.952760  187500 retry.go:31] will retry after 861.798605ms: waiting for machine to come up
	I1028 12:16:20.816156  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:20.816794  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:20.816826  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:20.816750  187500 retry.go:31] will retry after 908.062214ms: waiting for machine to come up
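The interleaved no-preload-871884 lines show the VM being polled for a DHCP lease with growing, jittered delays (240ms, 361ms, 386ms, 531ms, 760ms, ...) until libvirt reports an IP. A minimal sketch of that kind of wait loop in Go; the lookup function is a stand-in for the libvirt DHCP-lease query and the backoff shape is illustrative, not minikube's actual retry helper:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a jittered,
// growing delay between attempts, up to a total deadline.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 2 // grow roughly 1.5x per attempt
	}
	return "", errors.New("machine did not get an IP before the deadline")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(ip, err)
}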
	I1028 12:16:18.686980  186547 crio.go:462] duration metric: took 1.583134893s to copy over tarball
	I1028 12:16:18.687053  186547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:16:21.016264  186547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.329174428s)
	I1028 12:16:21.016309  186547 crio.go:469] duration metric: took 2.329300291s to extract the tarball
	I1028 12:16:21.016322  186547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:16:21.053950  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:21.112876  186547 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:16:21.112903  186547 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:16:21.112914  186547 kubeadm.go:934] updating node { 192.168.50.75 8444 v1.31.2 crio true true} ...
	I1028 12:16:21.113037  186547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-349222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:21.113119  186547 ssh_runner.go:195] Run: crio config
	I1028 12:16:21.179853  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:21.179877  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:21.179888  186547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:21.179907  186547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.75 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-349222 NodeName:default-k8s-diff-port-349222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:21.180039  186547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.75
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-349222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.75"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.75"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:21.180117  186547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:21.191650  186547 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:21.191721  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:21.201670  186547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1028 12:16:21.220426  186547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:21.240774  186547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1028 12:16:21.263336  186547 ssh_runner.go:195] Run: grep 192.168.50.75	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:21.267818  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:21.281577  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:21.441517  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:21.464117  186547 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222 for IP: 192.168.50.75
	I1028 12:16:21.464145  186547 certs.go:194] generating shared ca certs ...
	I1028 12:16:21.464167  186547 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:21.464392  186547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:21.464458  186547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:21.464485  186547 certs.go:256] generating profile certs ...
	I1028 12:16:21.464599  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/client.key
	I1028 12:16:21.464691  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key.e54e33e0
	I1028 12:16:21.464749  186547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key
	I1028 12:16:21.464919  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:21.464967  186547 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:21.464981  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:21.465006  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:21.465031  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:21.465069  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:21.465124  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:21.465976  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:21.511145  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:21.572071  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:21.613442  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:21.655508  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 12:16:21.687378  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:16:21.713227  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:21.738909  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:21.765274  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:21.792427  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:21.817632  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:21.842996  186547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:21.861059  186547 ssh_runner.go:195] Run: openssl version
	I1028 12:16:21.867814  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:21.880769  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886245  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886325  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.893179  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:21.908974  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:21.926992  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932350  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932428  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.939073  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:21.952302  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:21.965485  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971486  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971564  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.978531  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:21.995399  186547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:22.001453  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:22.009449  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:22.016898  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:22.024410  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:22.033151  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:22.040981  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
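Before reusing the existing control-plane certificates, each one is checked with `openssl x509 -checkend 86400`, i.e. "is this certificate still valid for at least another 24 hours". An equivalent check written directly against crypto/x509 (a sketch; the path is one of those from the log, and reading it requires root on the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path stays valid for at
// least the given duration (the log uses 86400s, i.e. 24h).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}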
	I1028 12:16:22.048298  186547 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:22.048441  186547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:22.048531  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.095210  186547 cri.go:89] found id: ""
	I1028 12:16:22.095319  186547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:22.111740  186547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:22.111772  186547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:22.111828  186547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:22.122472  186547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:22.123648  186547 kubeconfig.go:125] found "default-k8s-diff-port-349222" server: "https://192.168.50.75:8444"
	I1028 12:16:22.126117  186547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:22.137057  186547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.75
	I1028 12:16:22.137096  186547 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:22.137108  186547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:22.137179  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.180526  186547 cri.go:89] found id: ""
	I1028 12:16:22.180638  186547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:22.197697  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:22.208176  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:22.208197  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:22.208246  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:16:22.218379  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:22.218438  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:22.228844  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:16:22.239330  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:22.239407  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:22.250200  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.260309  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:22.260374  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.271041  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:16:22.281556  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:22.281637  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:22.294003  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:22.305123  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:22.426791  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:18.403494  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:18.903364  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.403869  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.904257  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.404252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.904028  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.404218  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.903631  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.403882  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.904188  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.058068  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:24.059822  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:21.726767  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:21.727332  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:21.727373  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:21.727224  187500 retry.go:31] will retry after 1.684184533s: waiting for machine to come up
	I1028 12:16:23.412691  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:23.413228  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:23.413254  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:23.413177  187500 retry.go:31] will retry after 1.416062445s: waiting for machine to come up
	I1028 12:16:24.830846  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:24.831450  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:24.831480  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:24.831393  187500 retry.go:31] will retry after 2.716897952s: waiting for machine to come up
	I1028 12:16:23.288371  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.506229  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.575063  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
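Because existing configuration files were found but the kubeconfigs under /etc/kubernetes were missing, the restart path regenerates them by running individual kubeadm init phases in order rather than a full `kubeadm init`: certs all, kubeconfig all, kubelet-start, control-plane all, etcd local, each against the freshly written /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence from Go with os/exec, assuming kubeadm is on PATH; in the test the same commands run over SSH with the versioned binaries directory prepended:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", cfg)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}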
	I1028 12:16:23.644776  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:23.644896  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.145579  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.645050  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.666456  186547 api_server.go:72] duration metric: took 1.021679294s to wait for apiserver process to appear ...
	I1028 12:16:24.666493  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:24.666518  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:24.667086  186547 api_server.go:269] stopped: https://192.168.50.75:8444/healthz: Get "https://192.168.50.75:8444/healthz": dial tcp 192.168.50.75:8444: connect: connection refused
	I1028 12:16:25.166765  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:23.404152  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:23.904225  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.403333  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.904323  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.404279  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.904317  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.404253  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.904083  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.403621  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.903752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.336957  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.337000  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.337015  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.382075  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.382110  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.667083  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.671910  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:28.671935  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.167591  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.173364  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:29.173397  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.666902  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.672205  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:16:29.679964  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:16:29.680002  186547 api_server.go:131] duration metric: took 5.013500479s to wait for apiserver health ...
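The 12:16:24 to 12:16:29 healthz lines show the usual progression after a control-plane restart: first connection refused while the apiserver binds, then 403 Forbidden for the anonymous user, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A minimal sketch of that polling loop in plain net/http (certificate verification is skipped, which is appropriate only for a local health probe; the endpoint is the one from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it answers 200 OK or the deadline passes.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert is not in the system trust store; skip
		// verification for this local health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	start := time.Now()
	for time.Since(start) < deadline {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.75:8444/healthz", time.Minute))
}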
	I1028 12:16:29.680014  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:29.680032  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:29.681992  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:16:26.558629  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.560116  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:27.550893  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:27.551454  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:27.551476  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:27.551438  187500 retry.go:31] will retry after 2.986712877s: waiting for machine to come up
	I1028 12:16:30.539999  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:30.540601  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:30.540632  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:30.540526  187500 retry.go:31] will retry after 3.947007446s: waiting for machine to come up
	I1028 12:16:29.683325  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:16:29.697362  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:16:29.717296  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:16:29.726327  186547 system_pods.go:59] 8 kube-system pods found
	I1028 12:16:29.726363  186547 system_pods.go:61] "coredns-7c65d6cfc9-k5h7n" [e203fcce-1a8a-431b-a816-d75b33ca9417] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:16:29.726374  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [2214daee-0302-44cd-9297-836eeb011232] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:16:29.726391  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [c4331c24-07e2-4b50-ab04-31bcd00960e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:16:29.726402  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [9dddd9fb-ad03-4771-af1b-d9e1e024af52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:16:29.726413  186547 system_pods.go:61] "kube-proxy-bqq65" [ed5d0c14-0ddb-4446-a2f7-ae457d629fb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 12:16:29.726423  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [9cfcc366-038f-43a9-b919-48742fa419af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:16:29.726434  186547 system_pods.go:61] "metrics-server-6867b74b74-cgkz9" [3d919412-efb8-4030-a5d0-3c325c824c48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:16:29.726445  186547 system_pods.go:61] "storage-provisioner" [613b003c-1eee-4294-947f-ea7a21edc8d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:16:29.726464  186547 system_pods.go:74] duration metric: took 9.135782ms to wait for pod list to return data ...
	I1028 12:16:29.726478  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:16:29.729971  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:16:29.729996  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:16:29.730009  186547 node_conditions.go:105] duration metric: took 3.525858ms to run NodePressure ...
	I1028 12:16:29.730035  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:30.043775  186547 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048614  186547 kubeadm.go:739] kubelet initialised
	I1028 12:16:30.048638  186547 kubeadm.go:740] duration metric: took 4.83853ms waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048647  186547 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:16:30.053908  186547 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:32.063283  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.404110  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.904058  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.404042  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.903819  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.404114  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.904140  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.404241  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.903586  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.403858  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.903566  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.057577  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:33.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:35.557338  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:34.491658  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492175  185546 main.go:141] libmachine: (no-preload-871884) Found IP for machine: 192.168.72.156
	I1028 12:16:34.492197  185546 main.go:141] libmachine: (no-preload-871884) Reserving static IP address...
	I1028 12:16:34.492215  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has current primary IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492674  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.492704  185546 main.go:141] libmachine: (no-preload-871884) Reserved static IP address: 192.168.72.156
	I1028 12:16:34.492739  185546 main.go:141] libmachine: (no-preload-871884) DBG | skip adding static IP to network mk-no-preload-871884 - found existing host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"}
	I1028 12:16:34.492763  185546 main.go:141] libmachine: (no-preload-871884) DBG | Getting to WaitForSSH function...
	I1028 12:16:34.492777  185546 main.go:141] libmachine: (no-preload-871884) Waiting for SSH to be available...
	I1028 12:16:34.495176  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495502  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.495536  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495682  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH client type: external
	I1028 12:16:34.495714  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa (-rw-------)
	I1028 12:16:34.495747  185546 main.go:141] libmachine: (no-preload-871884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:34.495770  185546 main.go:141] libmachine: (no-preload-871884) DBG | About to run SSH command:
	I1028 12:16:34.495796  185546 main.go:141] libmachine: (no-preload-871884) DBG | exit 0
	I1028 12:16:34.625650  185546 main.go:141] libmachine: (no-preload-871884) DBG | SSH cmd err, output: <nil>: 
	I1028 12:16:34.625959  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetConfigRaw
	I1028 12:16:34.626602  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.629137  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629442  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.629477  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629733  185546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/config.json ...
	I1028 12:16:34.629938  185546 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:34.629957  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:34.630153  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.632415  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.632777  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.632804  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.633033  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.633247  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633422  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633592  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.633762  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.633954  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.633968  185546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:34.738368  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:34.738406  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738696  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:16:34.738729  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738926  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.741750  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742216  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.742322  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742339  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.742538  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742689  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742857  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.743032  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.743248  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.743266  185546 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-871884 && echo "no-preload-871884" | sudo tee /etc/hostname
	I1028 12:16:34.863767  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-871884
	
	I1028 12:16:34.863802  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.867136  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867530  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.867561  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867822  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.868039  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868251  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868430  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.868634  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.868880  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.868905  185546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-871884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-871884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-871884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:34.989420  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:34.989450  185546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:34.989468  185546 buildroot.go:174] setting up certificates
	I1028 12:16:34.989476  185546 provision.go:84] configureAuth start
	I1028 12:16:34.989485  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.989790  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.992627  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.992977  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.993007  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.993225  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.995586  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.995888  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.995911  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.996122  185546 provision.go:143] copyHostCerts
	I1028 12:16:34.996190  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:34.996204  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:34.996261  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:34.996375  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:34.996384  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:34.996408  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:34.996472  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:34.996482  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:34.996499  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:34.996559  185546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.no-preload-871884 san=[127.0.0.1 192.168.72.156 localhost minikube no-preload-871884]
	I1028 12:16:35.437900  185546 provision.go:177] copyRemoteCerts
	I1028 12:16:35.437961  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:35.437985  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.440936  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441329  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.441361  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441555  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.441756  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.441921  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.442085  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.524911  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:35.554631  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 12:16:35.586946  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:16:35.620121  185546 provision.go:87] duration metric: took 630.630531ms to configureAuth
	I1028 12:16:35.620155  185546 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:35.620395  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:35.620502  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.623316  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623607  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.623643  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623886  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.624099  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624290  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624433  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.624612  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:35.624794  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:35.624810  185546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:35.886145  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:35.886178  185546 machine.go:96] duration metric: took 1.256224912s to provisionDockerMachine
	I1028 12:16:35.886196  185546 start.go:293] postStartSetup for "no-preload-871884" (driver="kvm2")
	I1028 12:16:35.886209  185546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:35.886232  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:35.886615  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:35.886653  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.889615  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890016  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.890048  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.890459  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.890654  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.890798  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.977889  185546 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:35.983360  185546 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:35.983387  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:35.983454  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:35.983543  185546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:35.983674  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:35.997400  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:36.025665  185546 start.go:296] duration metric: took 139.454088ms for postStartSetup
	I1028 12:16:36.025714  185546 fix.go:56] duration metric: took 20.538525254s for fixHost
	I1028 12:16:36.025739  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.028490  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.028933  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.028964  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.029170  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.029386  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029573  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029734  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.029909  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:36.030087  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:36.030098  185546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:36.138559  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117796.101397993
	
	I1028 12:16:36.138589  185546 fix.go:216] guest clock: 1730117796.101397993
	I1028 12:16:36.138599  185546 fix.go:229] Guest: 2024-10-28 12:16:36.101397993 +0000 UTC Remote: 2024-10-28 12:16:36.025719388 +0000 UTC m=+359.787107454 (delta=75.678605ms)
	I1028 12:16:36.138633  185546 fix.go:200] guest clock delta is within tolerance: 75.678605ms
	I1028 12:16:36.138638  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 20.651488254s
	I1028 12:16:36.138663  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.138953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:36.141711  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142144  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.142180  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142323  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.142975  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143165  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143240  185546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:36.143306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.143378  185546 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:36.143399  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.145980  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146166  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146348  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146375  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146507  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146617  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146657  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146701  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.146795  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146882  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.146953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.147013  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.147071  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.147202  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.223364  185546 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:36.246964  185546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:34.561016  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.564296  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.396734  185546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:36.403214  185546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:36.403298  185546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:36.421658  185546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:36.421695  185546 start.go:495] detecting cgroup driver to use...
	I1028 12:16:36.421772  185546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:36.441133  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:36.456750  185546 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:36.456806  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:36.473457  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:36.489210  185546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:36.621054  185546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:36.767341  185546 docker.go:233] disabling docker service ...
	I1028 12:16:36.767432  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:36.784655  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:36.799522  185546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:36.942312  185546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:37.066636  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:37.082284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:37.102462  185546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:37.102530  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.113687  185546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:37.113760  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.125624  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.137036  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.148417  185546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:37.160015  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.171382  185546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.192342  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.204353  185546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:37.215188  185546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:37.215275  185546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:37.230653  185546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:16:37.241484  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:37.382996  185546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:37.479263  185546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:37.479363  185546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:37.485265  185546 start.go:563] Will wait 60s for crictl version
	I1028 12:16:37.485330  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:37.489545  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:37.536126  185546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:37.536212  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.567538  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.600370  185546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:33.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:33.903341  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.403703  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.903445  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.404040  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.904246  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.403798  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.903950  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.403912  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.903423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.559329  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:40.057624  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:37.601686  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:37.604235  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604568  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:37.604601  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604782  185546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:37.609354  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:37.624966  185546 kubeadm.go:883] updating cluster {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:37.625081  185546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:37.625117  185546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:37.664112  185546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:37.664149  185546 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:16:37.664262  185546 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.664306  185546 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.664334  185546 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 12:16:37.664311  185546 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.664352  185546 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.664393  185546 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.664434  185546 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.664399  185546 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666080  185546 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.666083  185546 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.666081  185546 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.666142  185546 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.666085  185546 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 12:16:37.666079  185546 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.666185  185546 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666398  185546 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.840639  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.857089  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.859107  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 12:16:37.859358  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.863640  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.867925  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.876221  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.921581  185546 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 12:16:37.921638  185546 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.921689  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.042970  185546 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 12:16:38.043015  185546 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.043068  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093917  185546 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 12:16:38.093954  185546 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 12:16:38.093973  185546 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.093985  185546 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.094029  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094038  185546 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 12:16:38.094057  185546 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.094087  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.094094  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094030  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093976  185546 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 12:16:38.094143  185546 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.094152  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.094175  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.110134  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.110302  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.188922  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.188979  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.193920  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.193929  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.292698  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.325562  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.331855  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.332873  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.345880  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.345951  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.414842  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.470776  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.470949  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 12:16:38.471044  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.481197  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 12:16:38.481333  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:38.503147  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 12:16:38.503171  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:38.532884  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 12:16:38.533001  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:38.552405  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 12:16:38.552417  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 12:16:38.552472  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552485  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 12:16:38.552523  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:38.552529  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552552  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 12:16:38.552527  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 12:16:38.552598  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 12:16:38.829851  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127678  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.575124569s)
	I1028 12:16:41.127722  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 12:16:41.127744  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.575188461s)
	I1028 12:16:41.127775  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 12:16:41.127785  185546 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.297902587s)
	I1028 12:16:41.127803  185546 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127818  185546 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 12:16:41.127850  185546 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127858  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127895  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:39.064564  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:41.563643  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:38.403644  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:38.904220  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.404068  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.904158  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.403660  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.903678  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.404061  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.903568  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.404297  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.904036  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.058025  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:44.557594  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.190694  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062807881s)
	I1028 12:16:43.190736  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 12:16:43.190752  185546 ssh_runner.go:235] Completed: which crictl: (2.062836368s)
	I1028 12:16:43.190773  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:43.190827  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:43.190831  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:45.281583  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.090685426s)
	I1028 12:16:45.281620  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 12:16:45.281650  185546 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281679  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.090821035s)
	I1028 12:16:45.281698  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281750  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:45.325500  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:42.565395  186547 pod_ready.go:93] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.565425  186547 pod_ready.go:82] duration metric: took 12.511487215s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.565438  186547 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572364  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.572388  186547 pod_ready.go:82] duration metric: took 6.941356ms for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572402  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579074  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.579099  186547 pod_ready.go:82] duration metric: took 6.689137ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579116  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584088  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.584108  186547 pod_ready.go:82] duration metric: took 4.985095ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584118  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588810  186547 pod_ready.go:93] pod "kube-proxy-bqq65" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.588837  186547 pod_ready.go:82] duration metric: took 4.711896ms for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588849  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758349  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:43.758376  186547 pod_ready.go:82] duration metric: took 1.169519383s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758387  186547 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:45.766209  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
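	
	Editor's note: the pod_ready.go entries above trace how the test harness polls each kube-system pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, then metrics-server) until its Ready condition is True, logging the elapsed time per pod. Purely as an illustration, and not minikube's actual implementation, a minimal client-go sketch of such a wait could look like the following; the kubeconfig path and pod name are hypothetical.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // retry interval, roughly the cadence seen in the log
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}
	
	func main() {
		// Hypothetical kubeconfig path and pod name, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-example", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
	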
	I1028 12:16:43.404022  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:43.903570  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.403673  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.903585  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.403476  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.904069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.403906  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.904264  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.903991  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.059150  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.556589  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.174287  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.84875195s)
	I1028 12:16:49.174340  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 12:16:49.174291  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.892568087s)
	I1028 12:16:49.174422  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 12:16:49.174427  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:49.174466  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:49.174524  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:48.265641  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:50.271513  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:48.404207  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:48.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.404088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.903614  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.403587  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.904256  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.404314  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.903794  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.404122  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.903312  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.557320  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.557540  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:51.438821  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.26426785s)
	I1028 12:16:51.438857  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 12:16:51.438890  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.264449757s)
	I1028 12:16:51.438893  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:51.438911  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 12:16:51.438945  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:52.890902  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451935078s)
	I1028 12:16:52.890933  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 12:16:52.890960  185546 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:52.891010  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:53.643145  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 12:16:53.643208  185546 cache_images.go:123] Successfully loaded all cached images
	I1028 12:16:53.643216  185546 cache_images.go:92] duration metric: took 15.979050279s to LoadCachedImages
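	
	Editor's note: the cache_images.go lines above show cached image tarballs from .minikube/cache/images being loaded into the CRI-O runtime one at a time with `sudo podman load -i <tarball>`, while `crictl rmi` removes stale copies first. A hypothetical Go fragment reproducing just the local load step (illustration only, not the ssh_runner code used here) might look like:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// loadImage shells out to podman to load one cached image tarball, mirroring
	// the `sudo podman load -i /var/lib/minikube/images/<name>` calls in the log.
	func loadImage(tarball string) error {
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
		}
		return nil
	}
	
	func main() {
		// Tarball path taken from the log; adjust for the node being provisioned.
		if err := loadImage("/var/lib/minikube/images/coredns_v1.11.3"); err != nil {
			panic(err)
		}
		fmt.Println("image loaded")
	}
	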
	I1028 12:16:53.643231  185546 kubeadm.go:934] updating node { 192.168.72.156 8443 v1.31.2 crio true true} ...
	I1028 12:16:53.643393  185546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-871884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:53.643480  185546 ssh_runner.go:195] Run: crio config
	I1028 12:16:53.701778  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:16:53.701805  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:53.701814  185546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:53.701836  185546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.156 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-871884 NodeName:no-preload-871884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:53.701952  185546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-871884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.156"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.156"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:53.702019  185546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:53.714245  185546 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:53.714327  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:53.725610  185546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 12:16:53.745071  185546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:53.766897  185546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1028 12:16:53.787043  185546 ssh_runner.go:195] Run: grep 192.168.72.156	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:53.791580  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:53.805088  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:53.945235  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:53.964073  185546 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884 for IP: 192.168.72.156
	I1028 12:16:53.964099  185546 certs.go:194] generating shared ca certs ...
	I1028 12:16:53.964115  185546 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:53.964290  185546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:53.964338  185546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:53.964355  185546 certs.go:256] generating profile certs ...
	I1028 12:16:53.964458  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.key
	I1028 12:16:53.964533  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key.6934b48e
	I1028 12:16:53.964584  185546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key
	I1028 12:16:53.964719  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:53.964750  185546 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:53.964765  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:53.964801  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:53.964831  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:53.964866  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:53.964921  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:53.965632  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:54.004592  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:54.044270  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:54.079496  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:54.114473  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:16:54.141836  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:54.175201  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:54.202282  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:54.227874  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:54.254818  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:54.282950  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:54.310204  185546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:54.328834  185546 ssh_runner.go:195] Run: openssl version
	I1028 12:16:54.335391  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:54.347474  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352687  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352755  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.358834  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:54.373155  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:54.387035  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392179  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392281  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.398488  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:54.412352  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:54.426361  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431415  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431470  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.437583  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:54.450708  185546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:54.456625  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:54.463458  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:54.469939  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:54.477873  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:54.484962  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:54.491679  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
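	
	Editor's note: the `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the existing files are reused. An equivalent stand-alone check in Go, using only the standard library (the certificate path below is one of those checked in the log; treat it as an example):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		// Example path; the log checks several files under /var/lib/minikube/certs.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same criterion as `openssl x509 -checkend 86400`: still valid 24h from now.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h; regeneration needed")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}
	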
	I1028 12:16:54.498106  185546 kubeadm.go:392] StartCluster: {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:54.498211  185546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:54.498287  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.543142  185546 cri.go:89] found id: ""
	I1028 12:16:54.543250  185546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:54.555948  185546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:54.555971  185546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:54.556021  185546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:54.566954  185546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:54.567990  185546 kubeconfig.go:125] found "no-preload-871884" server: "https://192.168.72.156:8443"
	I1028 12:16:54.570149  185546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:54.581005  185546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.156
	I1028 12:16:54.581039  185546 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:54.581051  185546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:54.581100  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.622676  185546 cri.go:89] found id: ""
	I1028 12:16:54.622742  185546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:54.642427  185546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:54.655104  185546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:54.655131  185546 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:54.655199  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:54.665367  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:54.665432  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:54.675664  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:54.685921  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:54.685997  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:54.698451  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.709982  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:54.710060  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.721243  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:54.731699  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:54.731780  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:54.743365  185546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:54.754284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:54.868055  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.645470  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.858805  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.940632  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:56.020654  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:56.020735  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.764963  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:54.766822  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.768500  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.403716  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:53.903325  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.404326  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.903529  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.403679  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.903480  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.403429  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.904252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.403496  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.058614  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.556085  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:00.556460  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.521589  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.021710  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.066266  185546 api_server.go:72] duration metric: took 1.045608096s to wait for apiserver process to appear ...
	I1028 12:16:57.066305  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:57.066326  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:16:57.066862  185546 api_server.go:269] stopped: https://192.168.72.156:8443/healthz: Get "https://192.168.72.156:8443/healthz": dial tcp 192.168.72.156:8443: connect: connection refused
	I1028 12:16:57.567124  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.159147  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.159179  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.159193  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.171505  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.171530  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.566560  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.570920  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:00.570947  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.066537  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.071173  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.071205  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.566517  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.577822  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.577851  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:02.066514  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:02.071117  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:17:02.078265  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:17:02.078293  185546 api_server.go:131] duration metric: took 5.011981306s to wait for apiserver health ...
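	
	Editor's note: the api_server.go probes above retry https://192.168.72.156:8443/healthz roughly every 500ms, tolerating the connection refusal, the early 403 (anonymous user), and the 500 responses (rbac/bootstrap-roles and priority-class post-start hooks still pending) until the endpoint returns 200. A minimal sketch of that polling pattern in Go, for illustration only; certificate verification is skipped here, whereas minikube authenticates with the cluster CA and client certificates:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// TLS verification skipped purely for this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.156:8443/healthz"
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", code)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not become healthy in time")
	}
	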
	I1028 12:17:02.078302  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:17:02.078308  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:17:02.080348  185546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:16:59.267565  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:01.766399  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.404020  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:58.903743  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.403548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.903515  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.403423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.903757  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.403620  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.903710  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.403932  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.903729  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.081626  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:17:02.103809  185546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:17:02.135225  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:17:02.152051  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:17:02.152102  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:17:02.152113  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:17:02.152125  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:17:02.152133  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:17:02.152146  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:17:02.152159  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:17:02.152167  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:17:02.152174  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:17:02.152183  185546 system_pods.go:74] duration metric: took 16.930389ms to wait for pod list to return data ...
	I1028 12:17:02.152192  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:17:02.157475  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:17:02.157504  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:17:02.157515  185546 node_conditions.go:105] duration metric: took 5.317861ms to run NodePressure ...
	I1028 12:17:02.157548  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:17:02.476553  185546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482764  185546 kubeadm.go:739] kubelet initialised
	I1028 12:17:02.482789  185546 kubeadm.go:740] duration metric: took 6.205425ms waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482798  185546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:02.487480  185546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.495454  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495482  185546 pod_ready.go:82] duration metric: took 7.976331ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.495495  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495505  185546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.499904  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499931  185546 pod_ready.go:82] duration metric: took 4.41555ms for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.499941  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499948  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.504272  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504300  185546 pod_ready.go:82] duration metric: took 4.345522ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.504325  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504337  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.538786  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538826  185546 pod_ready.go:82] duration metric: took 34.474629ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.538841  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538851  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.939462  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939490  185546 pod_ready.go:82] duration metric: took 400.627739ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.939502  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939511  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.339338  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339369  185546 pod_ready.go:82] duration metric: took 399.848996ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.339384  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339394  185546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.739585  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739640  185546 pod_ready.go:82] duration metric: took 400.235271ms for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.739655  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739665  185546 pod_ready.go:39] duration metric: took 1.256859696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:03.739682  185546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:17:03.755064  185546 ops.go:34] apiserver oom_adj: -16
	I1028 12:17:03.755086  185546 kubeadm.go:597] duration metric: took 9.199108841s to restartPrimaryControlPlane
	I1028 12:17:03.755096  185546 kubeadm.go:394] duration metric: took 9.256999682s to StartCluster
	I1028 12:17:03.755111  185546 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.755175  185546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:17:03.757048  185546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.757327  185546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:17:03.757425  185546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:17:03.757535  185546 addons.go:69] Setting storage-provisioner=true in profile "no-preload-871884"
	I1028 12:17:03.757563  185546 addons.go:234] Setting addon storage-provisioner=true in "no-preload-871884"
	I1028 12:17:03.757565  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:17:03.757589  185546 addons.go:69] Setting metrics-server=true in profile "no-preload-871884"
	I1028 12:17:03.757617  185546 addons.go:234] Setting addon metrics-server=true in "no-preload-871884"
	I1028 12:17:03.757568  185546 addons.go:69] Setting default-storageclass=true in profile "no-preload-871884"
	W1028 12:17:03.757626  185546 addons.go:243] addon metrics-server should already be in state true
	I1028 12:17:03.757635  185546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-871884"
	W1028 12:17:03.757573  185546 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:17:03.757669  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.757713  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.758051  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758093  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758196  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758233  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758231  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758355  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.759378  185546 out.go:177] * Verifying Kubernetes components...
	I1028 12:17:03.761108  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:17:03.786180  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I1028 12:17:03.786344  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I1028 12:17:03.787005  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787096  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.787658  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.788034  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.789126  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.789149  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.789333  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.789366  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.790199  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.790591  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.793866  185546 addons.go:234] Setting addon default-storageclass=true in "no-preload-871884"
	W1028 12:17:03.793890  185546 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:17:03.793920  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.794332  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.794384  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.806461  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I1028 12:17:03.806960  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.807572  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1028 12:17:03.807644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.807835  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808074  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.808188  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.808349  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.808603  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.808624  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808993  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.809610  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.809665  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.810531  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.812676  185546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:17:03.813307  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I1028 12:17:03.813821  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.814228  185546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:03.814248  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:17:03.814266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.814350  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.814373  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.814848  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.815284  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.815323  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.817336  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817751  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.817776  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817889  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.818079  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.818219  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.818357  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.830425  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1028 12:17:03.830940  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.831486  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.831507  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.831905  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.832125  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.834275  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.835260  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I1028 12:17:03.835687  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.836180  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.836200  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.836527  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.836604  185546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:17:03.836741  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.838273  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:17:03.838290  185546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:17:03.838306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.838508  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.839044  185546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:03.839060  185546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:17:03.839080  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.842836  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843272  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.843291  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843461  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.843598  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.843767  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.843774  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843909  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.844312  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.844330  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.845228  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.845354  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.845474  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.845623  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.981979  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:17:04.003932  185546 node_ready.go:35] waiting up to 6m0s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:04.071389  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:04.169654  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:04.186781  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:17:04.186808  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:17:04.252889  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:17:04.252921  185546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:17:04.315140  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.315166  185546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:17:04.395995  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.489084  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489122  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489426  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.489445  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489470  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.489481  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489490  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489763  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489781  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.497272  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.497297  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.497647  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.497677  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.497702  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185405  185546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.015712456s)
	I1028 12:17:05.185458  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185469  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.185749  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.185768  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185778  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185786  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.186142  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.186160  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.186149  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.294924  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.294953  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295282  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295301  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295319  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295329  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.295339  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295584  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295615  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295622  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295641  185546 addons.go:475] Verifying addon metrics-server=true in "no-preload-871884"
	I1028 12:17:05.297689  185546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1028 12:17:02.557465  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:04.557517  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:05.298945  185546 addons.go:510] duration metric: took 1.541528913s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1028 12:17:06.008731  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.766439  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:06.267839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:03.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:03.904015  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:03.904157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:03.952859  186170 cri.go:89] found id: ""
	I1028 12:17:03.952891  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.952903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:03.952911  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:03.952972  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:03.991366  186170 cri.go:89] found id: ""
	I1028 12:17:03.991395  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.991406  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:03.991414  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:03.991472  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:04.030462  186170 cri.go:89] found id: ""
	I1028 12:17:04.030494  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.030505  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:04.030513  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:04.030577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:04.066765  186170 cri.go:89] found id: ""
	I1028 12:17:04.066797  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.066808  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:04.066829  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:04.066890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:04.113262  186170 cri.go:89] found id: ""
	I1028 12:17:04.113291  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.113321  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:04.113329  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:04.113397  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:04.162767  186170 cri.go:89] found id: ""
	I1028 12:17:04.162804  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.162816  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:04.162832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:04.162906  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:04.209735  186170 cri.go:89] found id: ""
	I1028 12:17:04.209768  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.209780  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:04.209788  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:04.209853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:04.258945  186170 cri.go:89] found id: ""
	I1028 12:17:04.258981  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.258993  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:04.259004  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:04.259031  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:04.314152  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:04.314191  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:04.330109  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:04.330154  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:04.495068  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:04.495096  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:04.495111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:04.576574  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:04.576612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.129008  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:07.149770  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:07.149835  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:07.200603  186170 cri.go:89] found id: ""
	I1028 12:17:07.200636  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.200648  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:07.200656  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:07.200733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:07.242681  186170 cri.go:89] found id: ""
	I1028 12:17:07.242709  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.242717  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:07.242723  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:07.242770  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:07.286826  186170 cri.go:89] found id: ""
	I1028 12:17:07.286860  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.286873  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:07.286881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:07.286943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:07.327730  186170 cri.go:89] found id: ""
	I1028 12:17:07.327765  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.327777  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:07.327787  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:07.327855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:07.369138  186170 cri.go:89] found id: ""
	I1028 12:17:07.369167  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.369178  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:07.369187  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:07.369257  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:07.411640  186170 cri.go:89] found id: ""
	I1028 12:17:07.411678  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.411690  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:07.411697  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:07.411758  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:07.454066  186170 cri.go:89] found id: ""
	I1028 12:17:07.454099  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.454109  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:07.454119  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:07.454180  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:07.489981  186170 cri.go:89] found id: ""
	I1028 12:17:07.490011  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.490020  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:07.490030  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:07.490044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:07.559890  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:07.559916  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:07.559927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:07.641601  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:07.641647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.687694  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:07.687732  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:07.739346  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:07.739389  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:06.558978  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:09.058557  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:08.507261  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:10.508790  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:11.007666  185546 node_ready.go:49] node "no-preload-871884" has status "Ready":"True"
	I1028 12:17:11.007698  185546 node_ready.go:38] duration metric: took 7.003728813s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:11.007710  185546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:11.014677  185546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020020  185546 pod_ready.go:93] pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:11.020042  185546 pod_ready.go:82] duration metric: took 5.339994ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020053  185546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:08.765053  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.766104  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.262069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:10.277467  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:10.277566  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:10.320331  186170 cri.go:89] found id: ""
	I1028 12:17:10.320366  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.320378  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:10.320387  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:10.320455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:10.357204  186170 cri.go:89] found id: ""
	I1028 12:17:10.357235  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.357252  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:10.357261  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:10.357324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:10.392480  186170 cri.go:89] found id: ""
	I1028 12:17:10.392510  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.392519  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:10.392526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:10.392574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:10.430084  186170 cri.go:89] found id: ""
	I1028 12:17:10.430120  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.430132  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:10.430140  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:10.430207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:10.479689  186170 cri.go:89] found id: ""
	I1028 12:17:10.479717  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.479724  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:10.479730  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:10.479786  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:10.520871  186170 cri.go:89] found id: ""
	I1028 12:17:10.520902  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.520912  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:10.520920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:10.520978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:10.559121  186170 cri.go:89] found id: ""
	I1028 12:17:10.559154  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.559167  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:10.559176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:10.559254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:10.596552  186170 cri.go:89] found id: ""
	I1028 12:17:10.596583  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.596594  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:10.596603  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:10.596615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:10.673014  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:10.673037  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:10.673055  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:10.762942  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:10.762982  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:10.805866  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:10.805901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:10.858861  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:10.858895  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:11.556955  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.560411  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.027402  185546 pod_ready.go:103] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:14.026501  185546 pod_ready.go:93] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.026537  185546 pod_ready.go:82] duration metric: took 3.006475793s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.026552  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036355  185546 pod_ready.go:93] pod "kube-apiserver-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.036379  185546 pod_ready.go:82] duration metric: took 9.819102ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036391  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042711  185546 pod_ready.go:93] pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.042734  185546 pod_ready.go:82] duration metric: took 6.336523ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042745  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047387  185546 pod_ready.go:93] pod "kube-proxy-6rc4l" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.047409  185546 pod_ready.go:82] duration metric: took 4.657388ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047422  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208217  185546 pod_ready.go:93] pod "kube-scheduler-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.208243  185546 pod_ready.go:82] duration metric: took 160.813834ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208254  185546 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:16.214834  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.268493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:15.271377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.373936  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:13.387904  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:13.387969  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:13.435502  186170 cri.go:89] found id: ""
	I1028 12:17:13.435528  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.435536  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:13.435547  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:13.435593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:13.475592  186170 cri.go:89] found id: ""
	I1028 12:17:13.475621  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.475631  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:13.475639  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:13.475703  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:13.524964  186170 cri.go:89] found id: ""
	I1028 12:17:13.524993  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.525002  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:13.525010  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:13.525071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:13.570408  186170 cri.go:89] found id: ""
	I1028 12:17:13.570437  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.570446  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:13.570455  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:13.570515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:13.620981  186170 cri.go:89] found id: ""
	I1028 12:17:13.621008  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.621016  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:13.621022  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:13.621071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:13.657345  186170 cri.go:89] found id: ""
	I1028 12:17:13.657375  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.657385  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:13.657393  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:13.657455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:13.695975  186170 cri.go:89] found id: ""
	I1028 12:17:13.695998  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.696005  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:13.696012  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:13.696059  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:13.744055  186170 cri.go:89] found id: ""
	I1028 12:17:13.744093  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.744112  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:13.744128  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:13.744143  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:13.798898  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:13.798936  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:13.813630  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:13.813676  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:13.886699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:13.886733  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:13.886750  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:13.972377  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:13.972419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.518525  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:16.532512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:16.532594  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:16.573345  186170 cri.go:89] found id: ""
	I1028 12:17:16.573370  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.573377  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:16.573384  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:16.573449  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:16.611130  186170 cri.go:89] found id: ""
	I1028 12:17:16.611159  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.611170  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:16.611179  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:16.611242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:16.646155  186170 cri.go:89] found id: ""
	I1028 12:17:16.646180  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.646187  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:16.646194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:16.646253  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:16.680731  186170 cri.go:89] found id: ""
	I1028 12:17:16.680761  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.680770  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:16.680776  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:16.680836  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:16.725323  186170 cri.go:89] found id: ""
	I1028 12:17:16.725351  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.725361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:16.725370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:16.725429  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:16.761810  186170 cri.go:89] found id: ""
	I1028 12:17:16.761839  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.761850  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:16.761859  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:16.761919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:16.797737  186170 cri.go:89] found id: ""
	I1028 12:17:16.797771  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.797783  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:16.797791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:16.797854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:16.834045  186170 cri.go:89] found id: ""
	I1028 12:17:16.834077  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.834087  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:16.834098  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:16.834111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:16.885174  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:16.885211  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:16.900281  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:16.900312  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:16.973761  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:16.973784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:16.973799  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:17.058711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:17.058747  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.056296  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.557898  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.215767  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:20.219613  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:17.764493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.766909  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:21.769560  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.605867  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:19.620832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:19.620896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:19.660722  186170 cri.go:89] found id: ""
	I1028 12:17:19.660747  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.660757  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:19.660765  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:19.660825  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:19.698537  186170 cri.go:89] found id: ""
	I1028 12:17:19.698571  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.698581  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:19.698590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:19.698639  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:19.736911  186170 cri.go:89] found id: ""
	I1028 12:17:19.736945  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.736956  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:19.736972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:19.737041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:19.779343  186170 cri.go:89] found id: ""
	I1028 12:17:19.779371  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.779379  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:19.779384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:19.779432  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:19.824749  186170 cri.go:89] found id: ""
	I1028 12:17:19.824778  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.824788  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:19.824796  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:19.824861  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:19.862810  186170 cri.go:89] found id: ""
	I1028 12:17:19.862850  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.862862  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:19.862871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:19.862935  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:19.910552  186170 cri.go:89] found id: ""
	I1028 12:17:19.910583  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.910592  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:19.910601  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:19.910663  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:19.956806  186170 cri.go:89] found id: ""
	I1028 12:17:19.956838  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.956850  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:19.956862  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:19.956879  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:20.018142  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:20.018187  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:20.035656  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:20.035696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:20.112484  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:20.112515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:20.112535  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:20.203034  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:20.203079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:22.749198  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:22.762993  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:22.763073  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:22.808879  186170 cri.go:89] found id: ""
	I1028 12:17:22.808923  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.808934  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:22.808943  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:22.809013  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:22.845367  186170 cri.go:89] found id: ""
	I1028 12:17:22.845393  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.845401  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:22.845407  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:22.845457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:22.884841  186170 cri.go:89] found id: ""
	I1028 12:17:22.884870  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.884877  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:22.884884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:22.884936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:22.921830  186170 cri.go:89] found id: ""
	I1028 12:17:22.921857  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.921865  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:22.921871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:22.921917  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:22.958981  186170 cri.go:89] found id: ""
	I1028 12:17:22.959016  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.959028  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:22.959038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:22.959138  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:22.993987  186170 cri.go:89] found id: ""
	I1028 12:17:22.994022  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.994033  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:22.994041  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:22.994112  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:23.036235  186170 cri.go:89] found id: ""
	I1028 12:17:23.036262  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.036270  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:23.036276  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:23.036326  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:23.084209  186170 cri.go:89] found id: ""
	I1028 12:17:23.084237  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.084248  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:23.084260  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:23.084274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:23.168684  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:23.168725  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:23.211205  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:23.211246  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:23.269140  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:23.269174  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:23.283588  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:23.283620  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:17:21.057114  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:23.058470  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:25.556210  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:22.714692  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.717301  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.269572  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:26.765467  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:17:23.363349  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:25.864503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:25.881420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:25.881505  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:25.920194  186170 cri.go:89] found id: ""
	I1028 12:17:25.920230  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.920242  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:25.920250  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:25.920319  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:25.982898  186170 cri.go:89] found id: ""
	I1028 12:17:25.982940  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.982952  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:25.982960  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:25.983026  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:26.042807  186170 cri.go:89] found id: ""
	I1028 12:17:26.042848  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.042856  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:26.042863  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:26.042914  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:26.081683  186170 cri.go:89] found id: ""
	I1028 12:17:26.081717  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.081729  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:26.081738  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:26.081811  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:26.118390  186170 cri.go:89] found id: ""
	I1028 12:17:26.118419  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.118426  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:26.118433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:26.118482  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:26.154065  186170 cri.go:89] found id: ""
	I1028 12:17:26.154100  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.154108  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:26.154114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:26.154168  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:26.195602  186170 cri.go:89] found id: ""
	I1028 12:17:26.195634  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.195645  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:26.195656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:26.195711  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:26.237315  186170 cri.go:89] found id: ""
	I1028 12:17:26.237350  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.237361  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:26.237371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:26.237383  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:26.319079  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:26.319121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:26.360967  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:26.360996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:26.414689  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:26.414728  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:26.429733  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:26.429763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:26.503297  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:28.056563  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:30.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:27.215356  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.216505  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.267239  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.765267  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.003479  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:29.017833  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:29.017908  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:29.067759  186170 cri.go:89] found id: ""
	I1028 12:17:29.067785  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.067793  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:29.067799  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:29.067856  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:29.114369  186170 cri.go:89] found id: ""
	I1028 12:17:29.114401  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.114411  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:29.114419  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:29.114511  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:29.154640  186170 cri.go:89] found id: ""
	I1028 12:17:29.154672  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.154683  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:29.154692  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:29.154749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:29.194296  186170 cri.go:89] found id: ""
	I1028 12:17:29.194331  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.194341  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:29.194349  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:29.194413  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:29.239107  186170 cri.go:89] found id: ""
	I1028 12:17:29.239133  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.239146  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:29.239152  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:29.239199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:29.274900  186170 cri.go:89] found id: ""
	I1028 12:17:29.274928  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.274937  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:29.274946  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:29.275010  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:29.310307  186170 cri.go:89] found id: ""
	I1028 12:17:29.310336  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.310346  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:29.310354  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:29.310421  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:29.345285  186170 cri.go:89] found id: ""
	I1028 12:17:29.345313  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.345351  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:29.345363  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:29.345379  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:29.402044  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:29.402094  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:29.417578  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:29.417615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:29.497733  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:29.497757  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:29.497773  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:29.587148  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:29.587202  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:32.132697  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:32.146675  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:32.146746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:32.188640  186170 cri.go:89] found id: ""
	I1028 12:17:32.188669  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.188681  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:32.188690  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:32.188749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:32.228690  186170 cri.go:89] found id: ""
	I1028 12:17:32.228726  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.228738  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:32.228745  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:32.228812  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:32.269133  186170 cri.go:89] found id: ""
	I1028 12:17:32.269180  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.269191  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:32.269200  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:32.269279  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:32.319757  186170 cri.go:89] found id: ""
	I1028 12:17:32.319796  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.319809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:32.319817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:32.319888  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:32.360072  186170 cri.go:89] found id: ""
	I1028 12:17:32.360104  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.360116  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:32.360125  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:32.360192  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:32.413256  186170 cri.go:89] found id: ""
	I1028 12:17:32.413286  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.413297  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:32.413319  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:32.413371  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:32.454505  186170 cri.go:89] found id: ""
	I1028 12:17:32.454536  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.454547  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:32.454555  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:32.454621  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:32.495091  186170 cri.go:89] found id: ""
	I1028 12:17:32.495129  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.495138  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:32.495148  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:32.495163  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:32.548669  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:32.548712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:32.566003  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:32.566044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:32.642079  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:32.642104  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:32.642117  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:32.727317  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:32.727361  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:33.055776  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.056525  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.714959  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:33.715292  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.715824  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:34.267155  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:36.765199  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.278752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:35.292256  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:35.292344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:35.328420  186170 cri.go:89] found id: ""
	I1028 12:17:35.328447  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.328457  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:35.328465  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:35.328528  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:35.365120  186170 cri.go:89] found id: ""
	I1028 12:17:35.365153  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.365162  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:35.365170  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:35.365236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:35.402057  186170 cri.go:89] found id: ""
	I1028 12:17:35.402093  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.402105  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:35.402114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:35.402179  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:35.436496  186170 cri.go:89] found id: ""
	I1028 12:17:35.436523  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.436531  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:35.436536  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:35.436593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:35.473369  186170 cri.go:89] found id: ""
	I1028 12:17:35.473399  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.473409  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:35.473416  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:35.473480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:35.511258  186170 cri.go:89] found id: ""
	I1028 12:17:35.511293  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.511305  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:35.511337  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:35.511403  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:35.548430  186170 cri.go:89] found id: ""
	I1028 12:17:35.548461  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.548472  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:35.548479  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:35.548526  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:35.584324  186170 cri.go:89] found id: ""
	I1028 12:17:35.584357  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.584369  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:35.584379  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:35.584394  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:35.598813  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:35.598855  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:35.676911  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:35.676935  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:35.676948  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:35.757166  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:35.757205  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:35.801381  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:35.801411  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:37.557428  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.057039  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:37.715996  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.213916  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.765841  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:41.267477  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.356346  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:38.370346  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:38.370436  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:38.413623  186170 cri.go:89] found id: ""
	I1028 12:17:38.413653  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.413664  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:38.413671  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:38.413741  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:38.450656  186170 cri.go:89] found id: ""
	I1028 12:17:38.450682  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.450691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:38.450697  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:38.450754  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:38.491050  186170 cri.go:89] found id: ""
	I1028 12:17:38.491083  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.491090  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:38.491096  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:38.491146  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:38.529708  186170 cri.go:89] found id: ""
	I1028 12:17:38.529735  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.529743  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:38.529749  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:38.529808  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:38.566632  186170 cri.go:89] found id: ""
	I1028 12:17:38.566659  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.566673  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:38.566681  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:38.566746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:38.602323  186170 cri.go:89] found id: ""
	I1028 12:17:38.602362  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.602374  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:38.602382  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:38.602444  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:38.646462  186170 cri.go:89] found id: ""
	I1028 12:17:38.646487  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.646494  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:38.646499  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:38.646560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:38.681803  186170 cri.go:89] found id: ""
	I1028 12:17:38.681830  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.681837  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:38.681847  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:38.681858  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:38.697360  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:38.697387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:38.769502  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:38.769549  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:38.769566  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:38.852029  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:38.852068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:38.895585  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:38.895621  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.450844  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:41.464665  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:41.464731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:41.507199  186170 cri.go:89] found id: ""
	I1028 12:17:41.507265  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.507274  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:41.507280  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:41.507351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:41.550126  186170 cri.go:89] found id: ""
	I1028 12:17:41.550158  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.550168  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:41.550176  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:41.550237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:41.588914  186170 cri.go:89] found id: ""
	I1028 12:17:41.588942  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.588953  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:41.588961  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:41.589027  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:41.625255  186170 cri.go:89] found id: ""
	I1028 12:17:41.625285  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.625297  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:41.625315  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:41.625386  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:41.663786  186170 cri.go:89] found id: ""
	I1028 12:17:41.663816  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.663833  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:41.663844  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:41.663911  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:41.698330  186170 cri.go:89] found id: ""
	I1028 12:17:41.698357  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.698364  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:41.698371  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:41.698424  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:41.734658  186170 cri.go:89] found id: ""
	I1028 12:17:41.734688  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.734699  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:41.734707  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:41.734776  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:41.773227  186170 cri.go:89] found id: ""
	I1028 12:17:41.773262  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.773273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:41.773286  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:41.773301  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:41.815830  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:41.815866  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.866789  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:41.866832  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:41.882088  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:41.882121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:41.953895  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:41.953917  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:41.953933  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:42.556504  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.557351  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:42.216159  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.216286  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:43.764776  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.265654  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.538655  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:44.551644  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:44.551724  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:44.589370  186170 cri.go:89] found id: ""
	I1028 12:17:44.589400  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.589407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:44.589413  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:44.589473  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:44.625143  186170 cri.go:89] found id: ""
	I1028 12:17:44.625175  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.625185  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:44.625198  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:44.625283  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:44.664579  186170 cri.go:89] found id: ""
	I1028 12:17:44.664609  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.664620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:44.664628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:44.664692  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:44.700009  186170 cri.go:89] found id: ""
	I1028 12:17:44.700038  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.700046  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:44.700053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:44.700119  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:44.736283  186170 cri.go:89] found id: ""
	I1028 12:17:44.736316  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.736323  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:44.736331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:44.736393  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:44.772214  186170 cri.go:89] found id: ""
	I1028 12:17:44.772249  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.772261  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:44.772270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:44.772324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:44.808152  186170 cri.go:89] found id: ""
	I1028 12:17:44.808187  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.808198  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:44.808206  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:44.808276  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:44.844208  186170 cri.go:89] found id: ""
	I1028 12:17:44.844238  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.844251  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:44.844264  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:44.844286  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:44.925988  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:44.926029  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:44.964936  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:44.964969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:45.015630  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:45.015675  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:45.030537  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:45.030571  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:45.103861  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:47.604548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:47.618858  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:47.618941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:47.663237  186170 cri.go:89] found id: ""
	I1028 12:17:47.663267  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.663278  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:47.663285  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:47.663350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:47.703207  186170 cri.go:89] found id: ""
	I1028 12:17:47.703236  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.703244  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:47.703250  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:47.703322  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:47.743050  186170 cri.go:89] found id: ""
	I1028 12:17:47.743081  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.743091  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:47.743099  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:47.743161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:47.789956  186170 cri.go:89] found id: ""
	I1028 12:17:47.789982  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.789989  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:47.789996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:47.790055  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:47.833134  186170 cri.go:89] found id: ""
	I1028 12:17:47.833165  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.833177  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:47.833184  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:47.833241  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:47.870881  186170 cri.go:89] found id: ""
	I1028 12:17:47.870905  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.870916  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:47.870925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:47.870992  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:47.908121  186170 cri.go:89] found id: ""
	I1028 12:17:47.908155  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.908165  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:47.908173  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:47.908236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:47.946835  186170 cri.go:89] found id: ""
	I1028 12:17:47.946871  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.946884  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:47.946896  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:47.946914  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:47.999276  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:47.999316  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:48.016268  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:48.016306  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:48.099928  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:48.099959  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:48.099976  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:48.180885  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:48.180937  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:46.565643  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.057078  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.716667  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.216308  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:48.267160  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.764737  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.727685  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:50.741737  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:50.741820  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:50.782030  186170 cri.go:89] found id: ""
	I1028 12:17:50.782060  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.782081  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:50.782090  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:50.782157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:50.817423  186170 cri.go:89] found id: ""
	I1028 12:17:50.817453  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.817464  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:50.817471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:50.817523  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:50.857203  186170 cri.go:89] found id: ""
	I1028 12:17:50.857232  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.857242  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:50.857249  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:50.857324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:50.894196  186170 cri.go:89] found id: ""
	I1028 12:17:50.894236  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.894248  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:50.894259  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:50.894325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:50.930014  186170 cri.go:89] found id: ""
	I1028 12:17:50.930046  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.930056  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:50.930064  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:50.930128  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:50.967742  186170 cri.go:89] found id: ""
	I1028 12:17:50.967774  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.967785  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:50.967799  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:50.967857  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:51.013232  186170 cri.go:89] found id: ""
	I1028 12:17:51.013258  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.013269  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:51.013281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:51.013341  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:51.052871  186170 cri.go:89] found id: ""
	I1028 12:17:51.052900  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.052912  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:51.052923  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:51.052943  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:51.106536  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:51.106579  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:51.121628  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:51.121670  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:51.200215  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:51.200249  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:51.200266  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:51.291948  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:51.291996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:51.058399  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.556450  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:55.557043  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:51.715736  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.215689  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:52.764839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.766020  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:57.269346  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.837066  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:53.851660  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:53.851747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:53.888799  186170 cri.go:89] found id: ""
	I1028 12:17:53.888835  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.888846  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:53.888855  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:53.888919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:53.923838  186170 cri.go:89] found id: ""
	I1028 12:17:53.923867  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.923875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:53.923880  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:53.923940  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:53.960264  186170 cri.go:89] found id: ""
	I1028 12:17:53.960293  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.960302  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:53.960307  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:53.960356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:53.995913  186170 cri.go:89] found id: ""
	I1028 12:17:53.995943  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.995952  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:53.995958  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:53.996009  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:54.032127  186170 cri.go:89] found id: ""
	I1028 12:17:54.032155  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.032163  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:54.032169  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:54.032219  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:54.070230  186170 cri.go:89] found id: ""
	I1028 12:17:54.070267  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.070279  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:54.070288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:54.070346  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:54.104992  186170 cri.go:89] found id: ""
	I1028 12:17:54.105024  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.105032  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:54.105038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:54.105099  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:54.140071  186170 cri.go:89] found id: ""
	I1028 12:17:54.140102  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.140113  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:54.140124  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:54.140137  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:54.195304  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:54.195353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:54.210315  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:54.210355  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:54.301247  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:54.301279  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:54.301300  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:54.382818  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:54.382876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:56.928740  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:56.942264  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:56.942334  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:56.979445  186170 cri.go:89] found id: ""
	I1028 12:17:56.979494  186170 logs.go:282] 0 containers: []
	W1028 12:17:56.979503  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:56.979510  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:56.979580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:57.017777  186170 cri.go:89] found id: ""
	I1028 12:17:57.017817  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.017831  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:57.017840  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:57.017954  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:57.058842  186170 cri.go:89] found id: ""
	I1028 12:17:57.058873  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.058881  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:57.058887  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:57.058941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:57.096365  186170 cri.go:89] found id: ""
	I1028 12:17:57.096393  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.096401  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:57.096408  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:57.096456  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:57.135395  186170 cri.go:89] found id: ""
	I1028 12:17:57.135425  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.135433  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:57.135440  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:57.135502  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:57.173426  186170 cri.go:89] found id: ""
	I1028 12:17:57.173455  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.173466  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:57.173473  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:57.173536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:57.209969  186170 cri.go:89] found id: ""
	I1028 12:17:57.210004  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.210015  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:57.210026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:57.210118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:57.252141  186170 cri.go:89] found id: ""
	I1028 12:17:57.252172  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.252182  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:57.252192  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:57.252206  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:57.304533  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:57.304576  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:57.319775  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:57.319807  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:57.385156  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:57.385186  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:57.385198  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:57.464777  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:57.464818  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:57.557519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.057963  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:56.715168  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:58.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.215445  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:59.271418  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.766158  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.005073  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:00.033478  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:00.033580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:00.071437  186170 cri.go:89] found id: ""
	I1028 12:18:00.071462  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.071470  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:00.071475  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:00.071524  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:00.108147  186170 cri.go:89] found id: ""
	I1028 12:18:00.108183  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.108195  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:00.108204  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:00.108262  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:00.146129  186170 cri.go:89] found id: ""
	I1028 12:18:00.146157  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.146168  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:00.146176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:00.146237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:00.184211  186170 cri.go:89] found id: ""
	I1028 12:18:00.184239  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.184254  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:00.184262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:00.184325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:00.221949  186170 cri.go:89] found id: ""
	I1028 12:18:00.221980  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.221988  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:00.221995  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:00.222049  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:00.264173  186170 cri.go:89] found id: ""
	I1028 12:18:00.264203  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.264213  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:00.264230  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:00.264287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:00.302024  186170 cri.go:89] found id: ""
	I1028 12:18:00.302048  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.302057  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:00.302065  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:00.302134  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:00.340500  186170 cri.go:89] found id: ""
	I1028 12:18:00.340529  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.340542  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:00.340553  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:00.340574  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:00.392375  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:00.392419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:00.409823  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:00.409854  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:00.489965  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:00.489988  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:00.490000  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:00.574510  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:00.574553  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.116821  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:03.131120  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:03.131188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:03.168283  186170 cri.go:89] found id: ""
	I1028 12:18:03.168320  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.168331  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:03.168340  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:03.168404  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:03.210877  186170 cri.go:89] found id: ""
	I1028 12:18:03.210902  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.210910  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:03.210922  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:03.210981  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:03.248316  186170 cri.go:89] found id: ""
	I1028 12:18:03.248351  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.248362  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:03.248370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:03.248437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:03.287624  186170 cri.go:89] found id: ""
	I1028 12:18:03.287653  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.287663  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:03.287674  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:03.287738  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:02.556743  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.055348  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.217504  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.715462  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.768899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:06.266111  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.323235  186170 cri.go:89] found id: ""
	I1028 12:18:03.323268  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.323281  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:03.323289  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:03.323350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:03.359449  186170 cri.go:89] found id: ""
	I1028 12:18:03.359481  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.359489  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:03.359496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:03.359544  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:03.397656  186170 cri.go:89] found id: ""
	I1028 12:18:03.397682  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.397690  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:03.397696  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:03.397756  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:03.436269  186170 cri.go:89] found id: ""
	I1028 12:18:03.436312  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.436325  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:03.436337  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:03.436353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.484677  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:03.484721  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:03.538826  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:03.538867  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:03.554032  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:03.554067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:03.630222  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:03.630256  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:03.630274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.208709  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:06.223650  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:06.223731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:06.264302  186170 cri.go:89] found id: ""
	I1028 12:18:06.264339  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.264348  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:06.264356  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:06.264415  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:06.306168  186170 cri.go:89] found id: ""
	I1028 12:18:06.306204  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.306212  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:06.306218  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:06.306306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:06.344883  186170 cri.go:89] found id: ""
	I1028 12:18:06.344909  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.344920  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:06.344927  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:06.344978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:06.382601  186170 cri.go:89] found id: ""
	I1028 12:18:06.382630  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.382640  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:06.382648  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:06.382720  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:06.428844  186170 cri.go:89] found id: ""
	I1028 12:18:06.428871  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.428878  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:06.428884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:06.428936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:06.480468  186170 cri.go:89] found id: ""
	I1028 12:18:06.480497  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.480508  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:06.480516  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:06.480581  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:06.525838  186170 cri.go:89] found id: ""
	I1028 12:18:06.525869  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.525882  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:06.525890  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:06.525950  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:06.572122  186170 cri.go:89] found id: ""
	I1028 12:18:06.572147  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.572154  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:06.572164  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:06.572176  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:06.642898  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:06.642925  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:06.642941  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.727353  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:06.727399  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:06.770170  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:06.770208  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:06.825593  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:06.825635  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:07.055842  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.057870  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:07.716593  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.215089  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:08.266990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.765441  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.340955  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:09.355706  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:09.355783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:09.390008  186170 cri.go:89] found id: ""
	I1028 12:18:09.390039  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.390050  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:09.390057  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:09.390123  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:09.428209  186170 cri.go:89] found id: ""
	I1028 12:18:09.428247  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.428259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:09.428267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:09.428327  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:09.466499  186170 cri.go:89] found id: ""
	I1028 12:18:09.466524  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.466531  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:09.466538  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:09.466596  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:09.505384  186170 cri.go:89] found id: ""
	I1028 12:18:09.505418  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.505426  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:09.505433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:09.505492  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:09.543113  186170 cri.go:89] found id: ""
	I1028 12:18:09.543145  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.543154  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:09.543160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:09.543225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:09.581402  186170 cri.go:89] found id: ""
	I1028 12:18:09.581436  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.581446  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:09.581459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:09.581542  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:09.620586  186170 cri.go:89] found id: ""
	I1028 12:18:09.620616  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.620623  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:09.620629  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:09.620682  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:09.657220  186170 cri.go:89] found id: ""
	I1028 12:18:09.657246  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.657253  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:09.657261  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:09.657272  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:09.709636  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:09.709671  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:09.724476  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:09.724510  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:09.800194  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:09.800226  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:09.800242  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:09.882217  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:09.882254  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:12.425609  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:12.443417  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:12.443480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:12.509173  186170 cri.go:89] found id: ""
	I1028 12:18:12.509202  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.509211  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:12.509217  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:12.509287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:12.546564  186170 cri.go:89] found id: ""
	I1028 12:18:12.546595  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.546605  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:12.546612  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:12.546676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:12.584949  186170 cri.go:89] found id: ""
	I1028 12:18:12.584982  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.584990  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:12.584996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:12.585045  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:12.624513  186170 cri.go:89] found id: ""
	I1028 12:18:12.624543  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.624554  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:12.624562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:12.624624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:12.661811  186170 cri.go:89] found id: ""
	I1028 12:18:12.661854  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.661867  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:12.661876  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:12.661936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:12.700037  186170 cri.go:89] found id: ""
	I1028 12:18:12.700072  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.700080  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:12.700086  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:12.700149  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:12.740604  186170 cri.go:89] found id: ""
	I1028 12:18:12.740629  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.740637  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:12.740643  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:12.740696  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:12.779296  186170 cri.go:89] found id: ""
	I1028 12:18:12.779323  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.779333  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:12.779344  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:12.779358  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:12.830286  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:12.830330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:12.845423  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:12.845449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:12.923961  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:12.924003  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:12.924018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:13.003949  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:13.003990  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:11.556422  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.056678  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.216340  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.715086  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.766493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.766870  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.264729  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:15.552001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:15.565834  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:15.565899  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:15.598794  186170 cri.go:89] found id: ""
	I1028 12:18:15.598819  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.598828  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:15.598836  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:15.598904  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:15.637029  186170 cri.go:89] found id: ""
	I1028 12:18:15.637062  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.637073  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:15.637082  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:15.637148  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:15.675461  186170 cri.go:89] found id: ""
	I1028 12:18:15.675495  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.675503  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:15.675510  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:15.675577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:15.709169  186170 cri.go:89] found id: ""
	I1028 12:18:15.709198  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.709210  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:15.709217  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:15.709288  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:15.747687  186170 cri.go:89] found id: ""
	I1028 12:18:15.747715  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.747725  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:15.747740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:15.747802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:15.785554  186170 cri.go:89] found id: ""
	I1028 12:18:15.785587  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.785598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:15.785607  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:15.785674  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:15.828713  186170 cri.go:89] found id: ""
	I1028 12:18:15.828749  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.828762  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:15.828771  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:15.828834  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:15.864708  186170 cri.go:89] found id: ""
	I1028 12:18:15.864745  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.864757  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:15.864767  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:15.864788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:15.941064  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:15.941090  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:15.941102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:16.031546  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:16.031586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:16.074297  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:16.074343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:16.132758  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:16.132803  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:16.057216  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.555816  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:20.556292  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.215803  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.215927  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.265178  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.268144  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.649877  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:18.663420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:18.663480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:18.698967  186170 cri.go:89] found id: ""
	I1028 12:18:18.698999  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.699011  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:18.699020  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:18.699088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:18.738095  186170 cri.go:89] found id: ""
	I1028 12:18:18.738128  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.738140  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:18.738149  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:18.738231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:18.780039  186170 cri.go:89] found id: ""
	I1028 12:18:18.780066  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.780074  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:18.780080  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:18.780131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:18.820458  186170 cri.go:89] found id: ""
	I1028 12:18:18.820492  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.820501  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:18.820512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:18.820569  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:18.860856  186170 cri.go:89] found id: ""
	I1028 12:18:18.860887  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.860896  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:18.860903  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:18.860965  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:18.900435  186170 cri.go:89] found id: ""
	I1028 12:18:18.900467  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.900478  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:18.900486  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:18.900547  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:18.938468  186170 cri.go:89] found id: ""
	I1028 12:18:18.938499  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.938508  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:18.938515  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:18.938570  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:18.975389  186170 cri.go:89] found id: ""
	I1028 12:18:18.975429  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.975440  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:18.975451  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:18.975466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:19.028306  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:19.028354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:19.043348  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:19.043382  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:19.117653  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:19.117721  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:19.117737  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:19.204218  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:19.204256  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:21.749564  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:21.768060  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:21.768131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:21.805414  186170 cri.go:89] found id: ""
	I1028 12:18:21.805443  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.805454  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:21.805462  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:21.805541  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:21.842649  186170 cri.go:89] found id: ""
	I1028 12:18:21.842681  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.842691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:21.842699  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:21.842767  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:21.883241  186170 cri.go:89] found id: ""
	I1028 12:18:21.883269  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.883279  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:21.883288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:21.883351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:21.926358  186170 cri.go:89] found id: ""
	I1028 12:18:21.926386  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.926394  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:21.926401  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:21.926453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:21.964671  186170 cri.go:89] found id: ""
	I1028 12:18:21.964705  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.964717  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:21.964726  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:21.964794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:22.019111  186170 cri.go:89] found id: ""
	I1028 12:18:22.019144  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.019154  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:22.019163  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:22.019223  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:22.057484  186170 cri.go:89] found id: ""
	I1028 12:18:22.057511  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.057518  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:22.057547  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:22.057606  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:22.096908  186170 cri.go:89] found id: ""
	I1028 12:18:22.096931  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.096938  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:22.096947  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:22.096962  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:22.180348  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:22.180386  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:22.224772  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:22.224808  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:22.277686  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:22.277726  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:22.293300  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:22.293330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:22.369990  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
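	The repeated "failed describe nodes" block above is the crux of this retry loop: with no kube-apiserver container running, every probe of localhost:8443 is refused. A minimal sketch of reproducing the same check by hand on the node, assuming the minikube-provisioned binary and kubeconfig paths shown in the log, is:

	    # both commands appear verbatim in the log above; quoting of the pgrep pattern added for interactive use
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # prints nothing while the apiserver is down
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    # -> "The connection to the server localhost:8443 was refused" until the apiserver comes up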
	I1028 12:18:22.556987  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.057115  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.715576  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.715814  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.716043  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.767435  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:26.269805  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:24.870290  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:24.887030  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:24.887090  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:24.927592  186170 cri.go:89] found id: ""
	I1028 12:18:24.927620  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.927628  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:24.927635  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:24.927700  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:24.969025  186170 cri.go:89] found id: ""
	I1028 12:18:24.969059  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.969070  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:24.969077  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:24.969142  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:25.005439  186170 cri.go:89] found id: ""
	I1028 12:18:25.005476  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.005488  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:25.005496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:25.005573  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:25.046612  186170 cri.go:89] found id: ""
	I1028 12:18:25.046650  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.046659  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:25.046669  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:25.046733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:25.083162  186170 cri.go:89] found id: ""
	I1028 12:18:25.083186  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.083200  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:25.083209  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:25.083270  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:25.119277  186170 cri.go:89] found id: ""
	I1028 12:18:25.119322  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.119333  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:25.119341  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:25.119409  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:25.160875  186170 cri.go:89] found id: ""
	I1028 12:18:25.160906  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.160917  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:25.160925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:25.160987  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:25.194958  186170 cri.go:89] found id: ""
	I1028 12:18:25.194993  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.195003  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:25.195016  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:25.195032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:25.248571  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:25.248612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:25.264844  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:25.264876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:25.341487  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:25.341517  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:25.341552  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:25.419543  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:25.419586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:27.963358  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:27.977449  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:27.977509  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:28.013922  186170 cri.go:89] found id: ""
	I1028 12:18:28.013955  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.013963  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:28.013969  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:28.014050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:28.054628  186170 cri.go:89] found id: ""
	I1028 12:18:28.054658  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.054666  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:28.054671  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:28.054719  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:28.094289  186170 cri.go:89] found id: ""
	I1028 12:18:28.094315  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.094323  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:28.094330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:28.094390  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:28.131949  186170 cri.go:89] found id: ""
	I1028 12:18:28.131998  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.132011  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:28.132019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:28.132082  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:28.170428  186170 cri.go:89] found id: ""
	I1028 12:18:28.170461  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.170474  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:28.170483  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:28.170550  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:28.204953  186170 cri.go:89] found id: ""
	I1028 12:18:28.204980  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.204987  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:28.204994  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:28.205041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:28.247002  186170 cri.go:89] found id: ""
	I1028 12:18:28.247035  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.247044  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:28.247052  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:28.247122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:28.286700  186170 cri.go:89] found id: ""
	I1028 12:18:28.286730  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.286739  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:28.286747  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:28.286762  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:27.556197  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.057036  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.216535  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.715902  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.765730  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:31.267947  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.339162  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:28.339201  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:28.353667  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:28.353696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:28.426762  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:28.426784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:28.426800  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:28.511192  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:28.511232  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:31.054503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:31.069105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:31.069195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:31.112198  186170 cri.go:89] found id: ""
	I1028 12:18:31.112228  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.112237  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:31.112243  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:31.112306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:31.151487  186170 cri.go:89] found id: ""
	I1028 12:18:31.151522  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.151535  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:31.151544  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:31.151605  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:31.189604  186170 cri.go:89] found id: ""
	I1028 12:18:31.189636  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.189645  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:31.189651  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:31.189712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:31.231683  186170 cri.go:89] found id: ""
	I1028 12:18:31.231716  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.231726  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:31.231735  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:31.231793  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:31.268785  186170 cri.go:89] found id: ""
	I1028 12:18:31.268813  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.268824  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:31.268832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:31.268901  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:31.307450  186170 cri.go:89] found id: ""
	I1028 12:18:31.307475  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.307483  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:31.307489  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:31.307539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:31.342965  186170 cri.go:89] found id: ""
	I1028 12:18:31.342999  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.343011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:31.343019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:31.343084  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:31.380275  186170 cri.go:89] found id: ""
	I1028 12:18:31.380307  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.380317  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:31.380329  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:31.380343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:31.430198  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:31.430249  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:31.446355  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:31.446387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:31.530708  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:31.530738  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:31.530754  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:31.614033  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:31.614079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
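	Each gathering cycle walks the same list of control-plane components and finds no containers for any of them ("0 containers" per name). A compact shell equivalent of those per-component probes, assuming crictl is available on the node as in the log, is:

	    # same crictl invocation the cycle above runs once per component
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"   # empty output corresponds to "0 containers" in the log
	    done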
	I1028 12:18:32.556500  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.557446  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.214627  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:35.214782  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.772856  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:36.265722  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.156345  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:34.169766  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:34.169829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:34.208855  186170 cri.go:89] found id: ""
	I1028 12:18:34.208888  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.208903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:34.208910  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:34.208967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:34.258485  186170 cri.go:89] found id: ""
	I1028 12:18:34.258515  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.258524  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:34.258531  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:34.258593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:34.294139  186170 cri.go:89] found id: ""
	I1028 12:18:34.294168  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.294176  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:34.294182  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:34.294242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:34.329848  186170 cri.go:89] found id: ""
	I1028 12:18:34.329881  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.329892  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:34.329900  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:34.329967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:34.368223  186170 cri.go:89] found id: ""
	I1028 12:18:34.368249  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.368256  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:34.368262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:34.368310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:34.405101  186170 cri.go:89] found id: ""
	I1028 12:18:34.405133  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.405142  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:34.405149  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:34.405207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:34.441998  186170 cri.go:89] found id: ""
	I1028 12:18:34.442034  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.442045  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:34.442053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:34.442118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:34.478842  186170 cri.go:89] found id: ""
	I1028 12:18:34.478877  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.478888  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:34.478901  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:34.478917  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:34.532950  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:34.532991  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:34.548614  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:34.548643  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:34.623699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:34.623726  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:34.623743  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:34.702104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:34.702142  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.259720  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:37.276526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:37.276592  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:37.325783  186170 cri.go:89] found id: ""
	I1028 12:18:37.325823  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.325838  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:37.325847  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:37.325916  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:37.362754  186170 cri.go:89] found id: ""
	I1028 12:18:37.362784  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.362805  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:37.362813  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:37.362891  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:37.400428  186170 cri.go:89] found id: ""
	I1028 12:18:37.400465  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.400477  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:37.400485  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:37.400548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:37.438792  186170 cri.go:89] found id: ""
	I1028 12:18:37.438834  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.438846  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:37.438855  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:37.438918  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:37.477032  186170 cri.go:89] found id: ""
	I1028 12:18:37.477115  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.477126  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:37.477132  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:37.477199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:37.514834  186170 cri.go:89] found id: ""
	I1028 12:18:37.514866  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.514878  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:37.514888  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:37.514975  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:37.560797  186170 cri.go:89] found id: ""
	I1028 12:18:37.560821  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.560828  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:37.560835  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:37.560889  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:37.611126  186170 cri.go:89] found id: ""
	I1028 12:18:37.611156  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.611165  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:37.611177  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:37.611200  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.654809  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:37.654849  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:37.713519  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:37.713572  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:37.728043  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:37.728081  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:37.806662  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:37.806684  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:37.806702  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:36.559507  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.056993  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:37.215498  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.715541  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:38.266461  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.266611  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:42.268638  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.388380  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:40.402330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:40.402405  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:40.444948  186170 cri.go:89] found id: ""
	I1028 12:18:40.444978  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.444990  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:40.445002  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:40.445062  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:40.482342  186170 cri.go:89] found id: ""
	I1028 12:18:40.482378  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.482387  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:40.482393  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:40.482457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:40.532277  186170 cri.go:89] found id: ""
	I1028 12:18:40.532307  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.532318  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:40.532326  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:40.532388  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:40.579092  186170 cri.go:89] found id: ""
	I1028 12:18:40.579122  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.579130  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:40.579136  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:40.579204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:40.617091  186170 cri.go:89] found id: ""
	I1028 12:18:40.617116  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.617124  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:40.617130  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:40.617188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:40.655830  186170 cri.go:89] found id: ""
	I1028 12:18:40.655861  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.655871  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:40.655879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:40.655949  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:40.693436  186170 cri.go:89] found id: ""
	I1028 12:18:40.693472  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.693480  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:40.693490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:40.693572  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:40.731576  186170 cri.go:89] found id: ""
	I1028 12:18:40.731604  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.731615  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:40.731626  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:40.731642  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:40.782395  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:40.782441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:40.797572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:40.797607  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:40.873037  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:40.873078  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:40.873095  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:40.950913  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:40.950954  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
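	Alongside the container probes, every cycle also collects the host-level sources listed above. The same data can be pulled manually with the exact commands the log shows (run on the node):

	    sudo journalctl -u kubelet -n 400                                        # kubelet logs
	    sudo journalctl -u crio -n 400                                           # CRI-O logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and errors
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # overall container status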
	I1028 12:18:41.555847  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.558407  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:41.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.716370  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:46.214690  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:44.765752  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:47.266258  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.493377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:43.508379  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:43.508453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:43.546621  186170 cri.go:89] found id: ""
	I1028 12:18:43.546652  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.546660  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:43.546667  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:43.546714  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:43.587430  186170 cri.go:89] found id: ""
	I1028 12:18:43.587455  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.587462  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:43.587468  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:43.587520  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:43.623597  186170 cri.go:89] found id: ""
	I1028 12:18:43.623625  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.623633  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:43.623640  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:43.623702  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:43.661235  186170 cri.go:89] found id: ""
	I1028 12:18:43.661266  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.661274  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:43.661281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:43.661344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:43.697400  186170 cri.go:89] found id: ""
	I1028 12:18:43.697437  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.697448  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:43.697457  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:43.697521  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:43.732995  186170 cri.go:89] found id: ""
	I1028 12:18:43.733028  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.733038  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:43.733047  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:43.733115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:43.772570  186170 cri.go:89] found id: ""
	I1028 12:18:43.772595  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.772602  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:43.772608  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:43.772669  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:43.814234  186170 cri.go:89] found id: ""
	I1028 12:18:43.814265  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.814273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:43.814283  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:43.814295  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:43.868582  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:43.868630  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:43.885098  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:43.885136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:43.967902  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:43.967937  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:43.967955  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:44.048973  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:44.049021  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.592668  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:46.608596  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:46.608664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:46.652750  186170 cri.go:89] found id: ""
	I1028 12:18:46.652777  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.652785  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:46.652790  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:46.652848  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:46.696309  186170 cri.go:89] found id: ""
	I1028 12:18:46.696333  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.696340  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:46.696346  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:46.696396  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:46.741580  186170 cri.go:89] found id: ""
	I1028 12:18:46.741609  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.741620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:46.741628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:46.741693  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:46.782589  186170 cri.go:89] found id: ""
	I1028 12:18:46.782620  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.782628  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:46.782635  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:46.782695  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:46.821602  186170 cri.go:89] found id: ""
	I1028 12:18:46.821632  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.821644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:46.821653  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:46.821713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:46.857025  186170 cri.go:89] found id: ""
	I1028 12:18:46.857050  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.857060  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:46.857067  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:46.857115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:46.893687  186170 cri.go:89] found id: ""
	I1028 12:18:46.893725  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.893737  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:46.893746  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:46.893818  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:46.930334  186170 cri.go:89] found id: ""
	I1028 12:18:46.930367  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.930377  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:46.930385  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:46.930398  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:46.980610  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:46.980650  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:46.995861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:46.995901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:47.069355  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:47.069383  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:47.069396  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:47.157228  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:47.157284  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.056747  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.058377  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.557006  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.715456  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.716120  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.267222  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:51.765814  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.722229  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:49.735404  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:49.735507  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:49.776722  186170 cri.go:89] found id: ""
	I1028 12:18:49.776757  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.776768  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:49.776776  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:49.776844  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:49.812856  186170 cri.go:89] found id: ""
	I1028 12:18:49.812888  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.812898  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:49.812905  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:49.812989  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:49.849483  186170 cri.go:89] found id: ""
	I1028 12:18:49.849516  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.849544  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:49.849603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:49.849672  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:49.886525  186170 cri.go:89] found id: ""
	I1028 12:18:49.886555  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.886566  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:49.886574  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:49.886637  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:49.928249  186170 cri.go:89] found id: ""
	I1028 12:18:49.928281  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.928292  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:49.928299  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:49.928354  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:49.964587  186170 cri.go:89] found id: ""
	I1028 12:18:49.964619  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.964630  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:49.964641  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:49.964704  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:50.002275  186170 cri.go:89] found id: ""
	I1028 12:18:50.002305  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.002314  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:50.002321  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:50.002376  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:50.040949  186170 cri.go:89] found id: ""
	I1028 12:18:50.040979  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.040990  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:50.041003  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:50.041018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:50.086062  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:50.086098  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:50.138786  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:50.138837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:50.152992  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:50.153023  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:50.230432  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:50.230465  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:50.230481  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:52.813001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:52.825800  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:52.825879  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:52.863852  186170 cri.go:89] found id: ""
	I1028 12:18:52.863882  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.863893  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:52.863901  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:52.863967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:52.902963  186170 cri.go:89] found id: ""
	I1028 12:18:52.903003  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.903016  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:52.903024  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:52.903098  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:52.950862  186170 cri.go:89] found id: ""
	I1028 12:18:52.950893  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.950903  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:52.950912  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:52.950980  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:52.995840  186170 cri.go:89] found id: ""
	I1028 12:18:52.995872  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.995883  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:52.995891  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:52.995960  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:53.040153  186170 cri.go:89] found id: ""
	I1028 12:18:53.040179  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.040187  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:53.040194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:53.040256  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:53.077492  186170 cri.go:89] found id: ""
	I1028 12:18:53.077548  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.077561  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:53.077568  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:53.077618  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:53.114930  186170 cri.go:89] found id: ""
	I1028 12:18:53.114962  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.114973  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:53.114981  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:53.115064  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:53.152707  186170 cri.go:89] found id: ""
	I1028 12:18:53.152737  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.152747  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:53.152760  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:53.152777  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:53.195033  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:53.195068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:53.246464  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:53.246500  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:53.261430  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:53.261456  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:18:52.557045  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.057031  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:53.215817  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.714784  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:54.268377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:56.764471  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:18:53.343518  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:53.343541  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:53.343556  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:55.924584  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:55.938627  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:55.938712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:55.976319  186170 cri.go:89] found id: ""
	I1028 12:18:55.976354  186170 logs.go:282] 0 containers: []
	W1028 12:18:55.976364  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:55.976372  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:55.976440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:56.013947  186170 cri.go:89] found id: ""
	I1028 12:18:56.013979  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.014002  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:56.014010  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:56.014065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:56.055934  186170 cri.go:89] found id: ""
	I1028 12:18:56.055963  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.055970  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:56.055976  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:56.056030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:56.092766  186170 cri.go:89] found id: ""
	I1028 12:18:56.092798  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.092809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:56.092817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:56.092883  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:56.129708  186170 cri.go:89] found id: ""
	I1028 12:18:56.129741  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.129748  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:56.129755  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:56.129817  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:56.169640  186170 cri.go:89] found id: ""
	I1028 12:18:56.169684  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.169693  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:56.169700  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:56.169761  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:56.210585  186170 cri.go:89] found id: ""
	I1028 12:18:56.210617  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.210626  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:56.210633  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:56.210683  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:56.248144  186170 cri.go:89] found id: ""
	I1028 12:18:56.248177  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.248189  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:56.248201  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:56.248216  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:56.298962  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:56.299004  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:56.313314  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:56.313351  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:56.389450  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:56.389473  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:56.389508  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:56.470888  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:56.470927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:57.556098  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.057165  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:57.716269  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.214149  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:58.765585  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:01.265119  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:59.012377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:59.025740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:59.025853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:59.063706  186170 cri.go:89] found id: ""
	I1028 12:18:59.063770  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.063782  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:59.063794  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:59.063855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:59.100543  186170 cri.go:89] found id: ""
	I1028 12:18:59.100573  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.100582  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:59.100590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:59.100651  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:59.140044  186170 cri.go:89] found id: ""
	I1028 12:18:59.140073  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.140080  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:59.140087  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:59.140133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:59.174872  186170 cri.go:89] found id: ""
	I1028 12:18:59.174905  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.174914  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:59.174920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:59.174971  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:59.210456  186170 cri.go:89] found id: ""
	I1028 12:18:59.210484  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.210492  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:59.210498  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:59.210560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:59.248441  186170 cri.go:89] found id: ""
	I1028 12:18:59.248474  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.248485  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:59.248494  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:59.248558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:59.286897  186170 cri.go:89] found id: ""
	I1028 12:18:59.286928  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.286937  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:59.286944  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:59.286996  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:59.323187  186170 cri.go:89] found id: ""
	I1028 12:18:59.323221  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.323232  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:59.323244  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:59.323260  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:59.401126  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:59.401156  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:59.401171  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:59.486673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:59.486712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:59.532117  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:59.532153  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:59.588697  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:59.588738  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.104377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:02.118007  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:02.118092  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:02.157674  186170 cri.go:89] found id: ""
	I1028 12:19:02.157705  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.157715  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:02.157724  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:02.157783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:02.194407  186170 cri.go:89] found id: ""
	I1028 12:19:02.194437  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.194448  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:02.194456  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:02.194546  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:02.232940  186170 cri.go:89] found id: ""
	I1028 12:19:02.232975  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.232988  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:02.232996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:02.233070  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:02.271554  186170 cri.go:89] found id: ""
	I1028 12:19:02.271595  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.271606  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:02.271613  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:02.271681  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:02.309932  186170 cri.go:89] found id: ""
	I1028 12:19:02.309965  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.309975  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:02.309984  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:02.310044  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:02.345704  186170 cri.go:89] found id: ""
	I1028 12:19:02.345732  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.345740  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:02.345747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:02.345794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:02.381727  186170 cri.go:89] found id: ""
	I1028 12:19:02.381760  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.381770  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:02.381778  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:02.381841  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:02.417888  186170 cri.go:89] found id: ""
	I1028 12:19:02.417922  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.417933  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:02.417943  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:02.417961  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:02.497427  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:02.497458  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:02.497471  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:02.580562  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:02.580600  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:02.619048  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:02.619087  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:02.677089  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:02.677136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.556763  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.557107  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:02.216779  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.714940  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:03.267189  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.268332  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.192892  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:05.207240  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:05.207325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:05.244005  186170 cri.go:89] found id: ""
	I1028 12:19:05.244041  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.244070  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:05.244078  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:05.244130  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:05.285828  186170 cri.go:89] found id: ""
	I1028 12:19:05.285859  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.285869  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:05.285877  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:05.285936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:05.324666  186170 cri.go:89] found id: ""
	I1028 12:19:05.324694  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.324706  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:05.324713  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:05.324782  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:05.361365  186170 cri.go:89] found id: ""
	I1028 12:19:05.361401  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.361414  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:05.361423  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:05.361485  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:05.399962  186170 cri.go:89] found id: ""
	I1028 12:19:05.399996  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.400007  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:05.400017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:05.400116  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:05.438510  186170 cri.go:89] found id: ""
	I1028 12:19:05.438541  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.438553  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:05.438562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:05.438624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:05.477168  186170 cri.go:89] found id: ""
	I1028 12:19:05.477204  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.477214  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:05.477222  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:05.477286  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:05.513314  186170 cri.go:89] found id: ""
	I1028 12:19:05.513350  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.513362  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:05.513374  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:05.513388  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:05.568453  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:05.568490  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:05.583833  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:05.583870  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:05.659413  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:05.659438  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:05.659457  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:05.744673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:05.744714  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
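Every describe-nodes attempt in this stretch fails identically: the connection to localhost:8443 is refused, which lines up with CRI-O reporting no kube-apiserver (or any other control-plane) container. A hypothetical manual check, not part of the test harness, to confirm nothing is listening on the apiserver port could look like:

    # Illustrative only: look for a listener on 8443 inside the VM
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # Illustrative only: probe the apiserver health endpoint directly
    curl -sk https://localhost:8443/livez || echo "apiserver not reachable"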
	I1028 12:19:08.291543  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:08.305747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:08.305829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:07.056718  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:09.056994  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:06.715788  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.716850  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:11.215701  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:07.765389  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:10.268458  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.350508  186170 cri.go:89] found id: ""
	I1028 12:19:08.350536  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.350544  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:08.350550  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:08.350602  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:08.387432  186170 cri.go:89] found id: ""
	I1028 12:19:08.387463  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.387470  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:08.387476  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:08.387527  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:08.426351  186170 cri.go:89] found id: ""
	I1028 12:19:08.426392  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.426404  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:08.426412  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:08.426478  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:08.467546  186170 cri.go:89] found id: ""
	I1028 12:19:08.467577  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.467586  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:08.467592  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:08.467642  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:08.504317  186170 cri.go:89] found id: ""
	I1028 12:19:08.504347  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.504356  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:08.504363  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:08.504418  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:08.539598  186170 cri.go:89] found id: ""
	I1028 12:19:08.539630  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.539642  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:08.539655  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:08.539713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:08.578128  186170 cri.go:89] found id: ""
	I1028 12:19:08.578162  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.578173  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:08.578181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:08.578247  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:08.614276  186170 cri.go:89] found id: ""
	I1028 12:19:08.614309  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.614326  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:08.614338  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:08.614354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:08.691937  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:08.691961  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:08.691977  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:08.773046  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:08.773092  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.816419  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:08.816449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:08.868763  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:08.868811  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.384115  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:11.398325  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:11.398416  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:11.433049  186170 cri.go:89] found id: ""
	I1028 12:19:11.433081  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.433089  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:11.433097  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:11.433151  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:11.469221  186170 cri.go:89] found id: ""
	I1028 12:19:11.469249  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.469259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:11.469267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:11.469332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:11.506673  186170 cri.go:89] found id: ""
	I1028 12:19:11.506703  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.506714  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:11.506722  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:11.506802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:11.542657  186170 cri.go:89] found id: ""
	I1028 12:19:11.542684  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.542694  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:11.542702  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:11.542760  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:11.582873  186170 cri.go:89] found id: ""
	I1028 12:19:11.582903  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.582913  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:11.582921  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:11.582990  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:11.619742  186170 cri.go:89] found id: ""
	I1028 12:19:11.619770  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.619784  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:11.619791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:11.619854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:11.654169  186170 cri.go:89] found id: ""
	I1028 12:19:11.654200  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.654211  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:11.654220  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:11.654280  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:11.690586  186170 cri.go:89] found id: ""
	I1028 12:19:11.690614  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.690624  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:11.690637  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:11.690656  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:11.744337  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:11.744378  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.758405  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:11.758446  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:11.843252  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:11.843278  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:11.843289  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:11.924104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:11.924140  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:11.559182  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.057546  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:13.216963  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:15.715550  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:12.764850  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.766597  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.265687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.464177  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:14.478351  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:14.478423  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:14.518159  186170 cri.go:89] found id: ""
	I1028 12:19:14.518189  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.518200  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:14.518209  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:14.518260  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:14.565688  186170 cri.go:89] found id: ""
	I1028 12:19:14.565722  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.565734  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:14.565742  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:14.565802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:14.601994  186170 cri.go:89] found id: ""
	I1028 12:19:14.602021  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.602029  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:14.602054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:14.602122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:14.640100  186170 cri.go:89] found id: ""
	I1028 12:19:14.640142  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.640156  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:14.640166  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:14.640237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:14.675395  186170 cri.go:89] found id: ""
	I1028 12:19:14.675422  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.675430  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:14.675436  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:14.675494  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:14.715365  186170 cri.go:89] found id: ""
	I1028 12:19:14.715393  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.715404  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:14.715413  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:14.715466  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:14.761335  186170 cri.go:89] found id: ""
	I1028 12:19:14.761363  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.761373  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:14.761381  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:14.761446  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:14.800412  186170 cri.go:89] found id: ""
	I1028 12:19:14.800449  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.800461  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:14.800472  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:14.800486  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:14.882189  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:14.882227  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:14.926725  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:14.926752  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:14.979280  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:14.979329  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:14.993985  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:14.994019  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:15.063407  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
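The describe-nodes step uses the kubectl binary and kubeconfig that minikube provisions inside the VM, exactly as the failing command shows; re-running it verbatim reproduces exit status 1 for as long as the apiserver stays down:

    # Same command the harness runs (verbatim from the log); expect status 1 until the apiserver is reachable
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig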
	I1028 12:19:17.564258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:17.578611  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:17.578679  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:17.615753  186170 cri.go:89] found id: ""
	I1028 12:19:17.615784  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.615797  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:17.615805  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:17.615864  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:17.650812  186170 cri.go:89] found id: ""
	I1028 12:19:17.650851  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.650862  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:17.650870  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:17.651014  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:17.693006  186170 cri.go:89] found id: ""
	I1028 12:19:17.693039  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.693048  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:17.693054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:17.693104  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:17.733120  186170 cri.go:89] found id: ""
	I1028 12:19:17.733146  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.733153  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:17.733160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:17.733212  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:17.773002  186170 cri.go:89] found id: ""
	I1028 12:19:17.773029  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.773036  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:17.773042  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:17.773097  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:17.812560  186170 cri.go:89] found id: ""
	I1028 12:19:17.812590  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.812597  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:17.812603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:17.812653  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:17.848307  186170 cri.go:89] found id: ""
	I1028 12:19:17.848341  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.848349  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:17.848355  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:17.848402  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:17.888184  186170 cri.go:89] found id: ""
	I1028 12:19:17.888210  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.888217  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:17.888226  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:17.888238  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:17.901662  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:17.901692  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:17.975611  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.975634  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:17.975647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:18.054762  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:18.054801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:18.101269  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:18.101302  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:16.057835  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:18.556414  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.716374  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.216629  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:19.266849  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:21.267040  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.655292  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:20.671085  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:20.671161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:20.715368  186170 cri.go:89] found id: ""
	I1028 12:19:20.715397  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.715407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:20.715415  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:20.715476  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:20.762337  186170 cri.go:89] found id: ""
	I1028 12:19:20.762366  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.762374  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:20.762379  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:20.762437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:20.804710  186170 cri.go:89] found id: ""
	I1028 12:19:20.804740  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.804747  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:20.804759  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:20.804813  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:20.841158  186170 cri.go:89] found id: ""
	I1028 12:19:20.841189  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.841199  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:20.841208  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:20.841277  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:20.883976  186170 cri.go:89] found id: ""
	I1028 12:19:20.884016  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.884027  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:20.884035  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:20.884105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:20.930155  186170 cri.go:89] found id: ""
	I1028 12:19:20.930186  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.930194  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:20.930201  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:20.930265  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:20.967805  186170 cri.go:89] found id: ""
	I1028 12:19:20.967832  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.967840  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:20.967847  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:20.967896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:21.020010  186170 cri.go:89] found id: ""
	I1028 12:19:21.020038  186170 logs.go:282] 0 containers: []
	W1028 12:19:21.020046  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:21.020055  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:21.020079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:21.081013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:21.081054  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:21.096709  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:21.096741  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:21.172935  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:21.172957  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:21.172970  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:21.248909  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:21.248949  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:21.056990  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.057233  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:25.555717  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:22.715323  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:24.715818  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.765935  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:26.264839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.793748  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:23.809036  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:23.809107  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:23.848021  186170 cri.go:89] found id: ""
	I1028 12:19:23.848051  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.848064  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:23.848070  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:23.848122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:23.885253  186170 cri.go:89] found id: ""
	I1028 12:19:23.885278  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.885294  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:23.885302  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:23.885360  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:23.923423  186170 cri.go:89] found id: ""
	I1028 12:19:23.923475  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.923484  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:23.923490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:23.923554  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:23.963761  186170 cri.go:89] found id: ""
	I1028 12:19:23.963793  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.963809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:23.963820  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:23.963890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:24.001402  186170 cri.go:89] found id: ""
	I1028 12:19:24.001431  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.001440  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:24.001447  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:24.001512  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:24.042367  186170 cri.go:89] found id: ""
	I1028 12:19:24.042400  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.042410  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:24.042419  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:24.042480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:24.081838  186170 cri.go:89] found id: ""
	I1028 12:19:24.081865  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.081873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:24.081879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:24.081932  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:24.117066  186170 cri.go:89] found id: ""
	I1028 12:19:24.117096  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.117104  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:24.117113  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:24.117125  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:24.156892  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:24.156928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:24.210595  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:24.210631  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:24.226214  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:24.226248  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:24.304750  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:24.304775  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:24.304792  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:26.887059  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:26.901656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:26.901735  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:26.944377  186170 cri.go:89] found id: ""
	I1028 12:19:26.944407  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.944416  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:26.944425  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:26.944487  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:26.980794  186170 cri.go:89] found id: ""
	I1028 12:19:26.980827  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.980835  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:26.980841  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:26.980907  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:27.023661  186170 cri.go:89] found id: ""
	I1028 12:19:27.023686  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.023694  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:27.023701  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:27.023753  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:27.062325  186170 cri.go:89] found id: ""
	I1028 12:19:27.062353  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.062361  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:27.062369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:27.062417  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:27.101200  186170 cri.go:89] found id: ""
	I1028 12:19:27.101230  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.101237  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:27.101243  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:27.101300  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:27.139566  186170 cri.go:89] found id: ""
	I1028 12:19:27.139591  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.139598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:27.139605  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:27.139664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:27.183931  186170 cri.go:89] found id: ""
	I1028 12:19:27.183959  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.183968  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:27.183996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:27.184065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:27.226978  186170 cri.go:89] found id: ""
	I1028 12:19:27.227012  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.227027  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:27.227038  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:27.227067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:27.279752  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:27.279790  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:27.293477  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:27.293504  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:27.365813  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:27.365836  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:27.365850  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:27.458409  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:27.458466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:27.556370  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.057786  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:27.216093  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:29.715861  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:28.265912  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.266993  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:32.267566  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.023363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:30.036965  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:30.037032  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:30.077599  186170 cri.go:89] found id: ""
	I1028 12:19:30.077627  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.077635  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:30.077642  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:30.077691  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:30.115071  186170 cri.go:89] found id: ""
	I1028 12:19:30.115103  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.115113  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:30.115121  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:30.115189  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:30.150636  186170 cri.go:89] found id: ""
	I1028 12:19:30.150665  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.150678  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:30.150684  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:30.150747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:30.188339  186170 cri.go:89] found id: ""
	I1028 12:19:30.188380  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.188390  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:30.188397  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:30.188452  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:30.224072  186170 cri.go:89] found id: ""
	I1028 12:19:30.224102  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.224113  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:30.224121  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:30.224185  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:30.258784  186170 cri.go:89] found id: ""
	I1028 12:19:30.258822  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.258834  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:30.258842  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:30.258903  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:30.302495  186170 cri.go:89] found id: ""
	I1028 12:19:30.302527  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.302535  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:30.302541  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:30.302590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:30.339170  186170 cri.go:89] found id: ""
	I1028 12:19:30.339201  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.339213  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:30.339223  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:30.339236  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:30.396664  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:30.396700  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:30.411609  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:30.411638  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:30.484168  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:30.484196  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:30.484212  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:30.567664  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:30.567704  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:33.111268  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:33.125143  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:33.125229  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:33.168662  186170 cri.go:89] found id: ""
	I1028 12:19:33.168701  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.168712  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:33.168722  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:33.168792  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:33.222421  186170 cri.go:89] found id: ""
	I1028 12:19:33.222451  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.222463  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:33.222471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:33.222536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:33.275637  186170 cri.go:89] found id: ""
	I1028 12:19:33.275669  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.275680  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:33.275689  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:33.275751  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:32.555888  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.556782  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:31.716178  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.213813  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.213999  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.764307  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.766217  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:33.325787  186170 cri.go:89] found id: ""
	I1028 12:19:33.325818  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.325830  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:33.325840  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:33.325900  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:33.361597  186170 cri.go:89] found id: ""
	I1028 12:19:33.361634  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.361644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:33.361652  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:33.361744  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:33.401838  186170 cri.go:89] found id: ""
	I1028 12:19:33.401866  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.401874  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:33.401880  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:33.401941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:33.439315  186170 cri.go:89] found id: ""
	I1028 12:19:33.439342  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.439351  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:33.439359  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:33.439422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:33.479140  186170 cri.go:89] found id: ""
	I1028 12:19:33.479177  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.479188  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:33.479206  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:33.479222  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:33.534059  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:33.534102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:33.549379  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:33.549416  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:33.626567  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:33.626603  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:33.626619  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:33.702398  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:33.702441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.250145  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:36.265123  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:36.265193  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:36.304048  186170 cri.go:89] found id: ""
	I1028 12:19:36.304078  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.304087  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:36.304093  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:36.304141  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:36.348611  186170 cri.go:89] found id: ""
	I1028 12:19:36.348649  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.348660  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:36.348672  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:36.348739  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:36.390510  186170 cri.go:89] found id: ""
	I1028 12:19:36.390543  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.390555  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:36.390563  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:36.390627  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:36.430465  186170 cri.go:89] found id: ""
	I1028 12:19:36.430489  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.430496  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:36.430503  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:36.430556  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:36.472189  186170 cri.go:89] found id: ""
	I1028 12:19:36.472216  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.472226  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:36.472234  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:36.472332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:36.510029  186170 cri.go:89] found id: ""
	I1028 12:19:36.510057  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.510065  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:36.510073  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:36.510133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:36.548556  186170 cri.go:89] found id: ""
	I1028 12:19:36.548581  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.548589  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:36.548595  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:36.548641  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:36.592965  186170 cri.go:89] found id: ""
	I1028 12:19:36.592993  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.593002  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:36.593013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:36.593032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:36.608843  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:36.608878  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:36.680629  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:36.680655  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:36.680672  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:36.768605  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:36.768636  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.815293  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:36.815334  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:37.056333  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.559461  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:38.214406  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:40.214795  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.264988  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:41.267329  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.369371  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:39.382819  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:39.382905  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:39.421953  186170 cri.go:89] found id: ""
	I1028 12:19:39.421990  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.422018  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:39.422028  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:39.422088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:39.457426  186170 cri.go:89] found id: ""
	I1028 12:19:39.457461  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.457478  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:39.457484  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:39.457558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:39.494983  186170 cri.go:89] found id: ""
	I1028 12:19:39.495008  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.495018  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:39.495026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:39.495105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:39.530187  186170 cri.go:89] found id: ""
	I1028 12:19:39.530221  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.530233  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:39.530242  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:39.530308  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:39.571088  186170 cri.go:89] found id: ""
	I1028 12:19:39.571123  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.571133  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:39.571142  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:39.571204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:39.605684  186170 cri.go:89] found id: ""
	I1028 12:19:39.605719  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.605731  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:39.605739  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:39.605804  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:39.639083  186170 cri.go:89] found id: ""
	I1028 12:19:39.639115  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.639125  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:39.639133  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:39.639195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:39.676273  186170 cri.go:89] found id: ""
	I1028 12:19:39.676310  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.676321  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:39.676332  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:39.676349  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:39.733153  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:39.733190  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:39.748475  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:39.748513  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:39.823884  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:39.823906  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:39.823920  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:39.903711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:39.903763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.447237  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:42.460741  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:42.460822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:42.500518  186170 cri.go:89] found id: ""
	I1028 12:19:42.500553  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.500565  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:42.500574  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:42.500636  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:42.542836  186170 cri.go:89] found id: ""
	I1028 12:19:42.542867  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.542875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:42.542882  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:42.542943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:42.581271  186170 cri.go:89] found id: ""
	I1028 12:19:42.581303  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.581322  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:42.581331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:42.581382  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:42.616772  186170 cri.go:89] found id: ""
	I1028 12:19:42.616796  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.616803  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:42.616809  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:42.616858  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:42.650467  186170 cri.go:89] found id: ""
	I1028 12:19:42.650504  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.650515  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:42.650524  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:42.650590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:42.688677  186170 cri.go:89] found id: ""
	I1028 12:19:42.688713  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.688726  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:42.688734  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:42.688796  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:42.727141  186170 cri.go:89] found id: ""
	I1028 12:19:42.727167  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.727174  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:42.727181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:42.727231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:42.767373  186170 cri.go:89] found id: ""
	I1028 12:19:42.767404  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.767415  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:42.767425  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:42.767438  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:42.818474  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:42.818511  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:42.832181  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:42.832210  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:42.905428  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:42.905450  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:42.905465  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:42.985614  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:42.985653  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.056568  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:44.057256  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:42.715261  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.215472  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:43.765595  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.766087  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.527361  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:45.541487  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:45.541574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:45.579562  186170 cri.go:89] found id: ""
	I1028 12:19:45.579591  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.579600  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:45.579606  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:45.579666  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:45.614461  186170 cri.go:89] found id: ""
	I1028 12:19:45.614494  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.614504  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:45.614512  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:45.614575  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:45.651495  186170 cri.go:89] found id: ""
	I1028 12:19:45.651538  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.651550  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:45.651558  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:45.651619  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:45.691664  186170 cri.go:89] found id: ""
	I1028 12:19:45.691699  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.691710  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:45.691718  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:45.691785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:45.730284  186170 cri.go:89] found id: ""
	I1028 12:19:45.730325  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.730341  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:45.730348  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:45.730410  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:45.766524  186170 cri.go:89] found id: ""
	I1028 12:19:45.766554  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.766565  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:45.766573  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:45.766630  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:45.803353  186170 cri.go:89] found id: ""
	I1028 12:19:45.803381  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.803393  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:45.803400  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:45.803468  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:45.842928  186170 cri.go:89] found id: ""
	I1028 12:19:45.842953  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.842960  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:45.842968  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:45.842979  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:45.921782  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:45.921809  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:45.921826  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:45.997269  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:45.997321  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:46.036008  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:46.036042  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:46.090242  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:46.090282  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:46.058519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.556533  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:47.215644  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:49.715563  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.266115  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:50.268535  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:52.271227  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.607052  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:48.620745  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:48.620816  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:48.657550  186170 cri.go:89] found id: ""
	I1028 12:19:48.657582  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.657592  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:48.657601  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:48.657676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:48.695514  186170 cri.go:89] found id: ""
	I1028 12:19:48.695542  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.695549  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:48.695555  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:48.695603  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:48.733589  186170 cri.go:89] found id: ""
	I1028 12:19:48.733616  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.733624  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:48.733631  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:48.733680  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:48.768340  186170 cri.go:89] found id: ""
	I1028 12:19:48.768370  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.768378  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:48.768384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:48.768435  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:48.818057  186170 cri.go:89] found id: ""
	I1028 12:19:48.818086  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.818096  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:48.818105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:48.818169  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:48.854663  186170 cri.go:89] found id: ""
	I1028 12:19:48.854695  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.854705  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:48.854715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:48.854785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:48.888919  186170 cri.go:89] found id: ""
	I1028 12:19:48.888949  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.888960  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:48.888969  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:48.889030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:48.923871  186170 cri.go:89] found id: ""
	I1028 12:19:48.923900  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.923908  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:48.923917  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:48.923928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:48.977985  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:48.978025  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:48.992861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:48.992893  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:49.071925  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:49.071952  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:49.071969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:49.149743  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:49.149784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.693881  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:51.708017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:51.708079  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:51.748837  186170 cri.go:89] found id: ""
	I1028 12:19:51.748872  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.748883  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:51.748892  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:51.748957  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:51.793684  186170 cri.go:89] found id: ""
	I1028 12:19:51.793716  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.793733  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:51.793741  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:51.793803  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:51.832104  186170 cri.go:89] found id: ""
	I1028 12:19:51.832140  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.832151  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:51.832159  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:51.832225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:51.866214  186170 cri.go:89] found id: ""
	I1028 12:19:51.866250  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.866264  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:51.866270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:51.866345  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:51.909073  186170 cri.go:89] found id: ""
	I1028 12:19:51.909100  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.909107  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:51.909113  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:51.909160  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:51.949202  186170 cri.go:89] found id: ""
	I1028 12:19:51.949231  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.949239  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:51.949245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:51.949306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:51.990977  186170 cri.go:89] found id: ""
	I1028 12:19:51.991004  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.991011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:51.991018  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:51.991069  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:52.027180  186170 cri.go:89] found id: ""
	I1028 12:19:52.027215  186170 logs.go:282] 0 containers: []
	W1028 12:19:52.027226  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:52.027237  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:52.027259  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:52.080482  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:52.080536  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:52.097572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:52.097612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:52.173055  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:52.173095  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:52.173113  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:52.249950  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:52.249995  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.056089  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:53.056973  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:55.057853  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:51.716787  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.214943  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.765208  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:57.267687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.794765  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:54.809435  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:54.809548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:54.846763  186170 cri.go:89] found id: ""
	I1028 12:19:54.846793  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.846805  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:54.846815  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:54.846876  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:54.885359  186170 cri.go:89] found id: ""
	I1028 12:19:54.885396  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.885409  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:54.885417  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:54.885481  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:54.922612  186170 cri.go:89] found id: ""
	I1028 12:19:54.922639  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.922650  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:54.922659  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:54.922722  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:54.958406  186170 cri.go:89] found id: ""
	I1028 12:19:54.958439  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.958450  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:54.958459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:54.958525  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:54.995319  186170 cri.go:89] found id: ""
	I1028 12:19:54.995350  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.995361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:54.995370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:54.995440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:55.032511  186170 cri.go:89] found id: ""
	I1028 12:19:55.032543  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.032551  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:55.032559  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:55.032624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:55.073196  186170 cri.go:89] found id: ""
	I1028 12:19:55.073226  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.073238  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:55.073245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:55.073310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:55.113726  186170 cri.go:89] found id: ""
	I1028 12:19:55.113754  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.113762  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:55.113771  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:55.113787  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:55.164402  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:55.164442  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:55.180729  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:55.180763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:55.254437  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:55.254466  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:55.254483  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:55.341392  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:55.341441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:57.883896  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:57.897429  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:57.897539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:57.933084  186170 cri.go:89] found id: ""
	I1028 12:19:57.933109  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.933118  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:57.933127  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:57.933198  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:57.971244  186170 cri.go:89] found id: ""
	I1028 12:19:57.971276  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.971289  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:57.971298  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:57.971361  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:58.007916  186170 cri.go:89] found id: ""
	I1028 12:19:58.007952  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.007963  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:58.007972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:58.008050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:58.043042  186170 cri.go:89] found id: ""
	I1028 12:19:58.043084  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.043094  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:58.043103  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:58.043172  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:58.080277  186170 cri.go:89] found id: ""
	I1028 12:19:58.080314  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.080324  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:58.080332  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:58.080395  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:58.117254  186170 cri.go:89] found id: ""
	I1028 12:19:58.117292  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.117301  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:58.117308  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:58.117356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:58.152830  186170 cri.go:89] found id: ""
	I1028 12:19:58.152862  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.152873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:58.152881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:58.152946  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:58.190229  186170 cri.go:89] found id: ""
	I1028 12:19:58.190259  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.190270  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:58.190281  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:58.190296  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:58.231792  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:58.231823  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:58.291189  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:58.291233  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:58.307804  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:58.307837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:19:57.556056  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.557091  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:00.050404  185942 pod_ready.go:82] duration metric: took 4m0.000726571s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:00.050457  185942 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:00.050479  185942 pod_ready.go:39] duration metric: took 4m12.759391454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:00.050506  185942 kubeadm.go:597] duration metric: took 4m20.427916933s to restartPrimaryControlPlane
	W1028 12:20:00.050569  185942 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:00.050616  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:19:56.715048  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.215821  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.769397  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:02.265702  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:19:58.384490  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:58.384515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:58.384530  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:00.963569  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:00.977292  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:20:00.977363  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:20:01.017161  186170 cri.go:89] found id: ""
	I1028 12:20:01.017190  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.017198  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:20:01.017204  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:20:01.017254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:20:01.054651  186170 cri.go:89] found id: ""
	I1028 12:20:01.054687  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.054698  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:20:01.054705  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:20:01.054768  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:20:01.092934  186170 cri.go:89] found id: ""
	I1028 12:20:01.092968  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.092979  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:20:01.092988  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:20:01.093048  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:20:01.134463  186170 cri.go:89] found id: ""
	I1028 12:20:01.134499  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.134510  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:20:01.134519  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:20:01.134580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:20:01.171922  186170 cri.go:89] found id: ""
	I1028 12:20:01.171960  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.171970  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:20:01.171978  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:20:01.172050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:20:01.208664  186170 cri.go:89] found id: ""
	I1028 12:20:01.208694  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.208703  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:20:01.208715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:20:01.208781  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:20:01.248207  186170 cri.go:89] found id: ""
	I1028 12:20:01.248242  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.248251  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:20:01.248258  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:20:01.248318  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:20:01.289182  186170 cri.go:89] found id: ""
	I1028 12:20:01.289212  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.289222  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:20:01.289233  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:20:01.289277  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:20:01.334646  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:20:01.334679  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:20:01.396212  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:20:01.396255  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:20:01.411774  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:20:01.411801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:20:01.497745  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:20:01.497772  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:20:01.497784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:01.715264  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.216628  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.765386  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:06.765802  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.092363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:04.106585  186170 kubeadm.go:597] duration metric: took 4m1.83229859s to restartPrimaryControlPlane
	W1028 12:20:04.106657  186170 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:04.106678  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:07.549703  186170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.442997936s)
	I1028 12:20:07.549781  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:07.565304  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:07.577919  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:07.590433  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:07.590461  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:07.590514  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:07.600793  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:07.600858  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:07.611331  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:07.621191  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:07.621256  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:07.631722  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.642180  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:07.642255  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.654425  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:07.664696  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:07.664755  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:07.675272  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:07.902931  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:06.715439  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.214561  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.216343  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.265899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.764867  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:13.716362  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.214893  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:14.264333  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.765340  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:18.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:20.715790  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:19.270934  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:21.764931  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:22.715880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:25.216499  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:23.766240  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.271567  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.353961  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.303321788s)
	I1028 12:20:26.354038  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:26.373066  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:26.386209  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:26.398568  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:26.398591  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:26.398634  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:26.410916  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:26.410976  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:26.423771  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:26.435883  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:26.435961  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:26.448506  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.460449  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:26.460506  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.472817  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:26.483653  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:26.483743  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:26.494435  185942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:26.682378  185942 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:27.715587  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:29.717407  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:28.766206  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:30.766289  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.820344  185942 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:20:35.820446  185942 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:20:35.820555  185942 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:20:35.820688  185942 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:20:35.820812  185942 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:20:35.820902  185942 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:20:35.823423  185942 out.go:235]   - Generating certificates and keys ...
	I1028 12:20:35.823594  185942 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:20:35.823700  185942 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:20:35.823804  185942 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:20:35.823893  185942 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:20:35.824001  185942 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:20:35.824082  185942 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:20:35.824167  185942 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:20:35.824255  185942 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:20:35.824360  185942 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:20:35.824445  185942 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:20:35.824504  185942 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:20:35.824566  185942 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:20:35.824622  185942 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:20:35.824725  185942 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:20:35.824805  185942 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:20:35.824944  185942 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:20:35.825058  185942 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:20:35.825209  185942 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:20:35.825300  185942 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:20:35.826890  185942 out.go:235]   - Booting up control plane ...
	I1028 12:20:35.827007  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:20:35.827077  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:20:35.827142  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:20:35.827285  185942 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:20:35.827420  185942 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:20:35.827487  185942 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:20:35.827705  185942 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:20:35.827848  185942 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:20:35.827943  185942 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.264999ms
	I1028 12:20:35.828059  185942 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:20:35.828130  185942 kubeadm.go:310] [api-check] The API server is healthy after 5.502732581s
	I1028 12:20:35.828299  185942 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:20:35.828472  185942 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:20:35.828523  185942 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:20:35.828712  185942 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-709250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:20:35.828764  185942 kubeadm.go:310] [bootstrap-token] Using token: srdxzz.lxk56bs7sgkeocij
	I1028 12:20:35.830228  185942 out.go:235]   - Configuring RBAC rules ...
	I1028 12:20:35.830335  185942 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:20:35.830422  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:20:35.830563  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:20:35.830729  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:20:35.830842  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:20:35.830928  185942 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:20:35.831065  185942 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:20:35.831122  185942 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:20:35.831174  185942 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:20:35.831181  185942 kubeadm.go:310] 
	I1028 12:20:35.831229  185942 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:20:35.831237  185942 kubeadm.go:310] 
	I1028 12:20:35.831302  185942 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:20:35.831313  185942 kubeadm.go:310] 
	I1028 12:20:35.831356  185942 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:20:35.831439  185942 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:20:35.831517  185942 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:20:35.831531  185942 kubeadm.go:310] 
	I1028 12:20:35.831616  185942 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:20:35.831628  185942 kubeadm.go:310] 
	I1028 12:20:35.831678  185942 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:20:35.831682  185942 kubeadm.go:310] 
	I1028 12:20:35.831730  185942 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:20:35.831809  185942 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:20:35.831921  185942 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:20:35.831933  185942 kubeadm.go:310] 
	I1028 12:20:35.832041  185942 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:20:35.832141  185942 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:20:35.832150  185942 kubeadm.go:310] 
	I1028 12:20:35.832249  185942 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832373  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:20:35.832404  185942 kubeadm.go:310] 	--control-plane 
	I1028 12:20:35.832414  185942 kubeadm.go:310] 
	I1028 12:20:35.832516  185942 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:20:35.832524  185942 kubeadm.go:310] 
	I1028 12:20:35.832642  185942 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832812  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 12:20:35.832833  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:20:35.832843  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:20:35.834428  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:20:35.835603  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:20:35.847857  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:20:35.867921  185942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:20:35.868088  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:35.868107  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709250 minikube.k8s.io/updated_at=2024_10_28T12_20_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=embed-certs-709250 minikube.k8s.io/primary=true
	I1028 12:20:35.908233  185942 ops.go:34] apiserver oom_adj: -16
	I1028 12:20:32.215299  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:34.716880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:32.766922  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.267132  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:36.121114  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:36.621188  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.122032  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.621405  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.122105  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.621960  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.122142  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.622093  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.121643  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.287609  185942 kubeadm.go:1113] duration metric: took 4.419612649s to wait for elevateKubeSystemPrivileges
	I1028 12:20:40.287656  185942 kubeadm.go:394] duration metric: took 5m0.720591132s to StartCluster
	I1028 12:20:40.287703  185942 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.287814  185942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:20:40.290472  185942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.290787  185942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:20:40.291051  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:20:40.290926  185942 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:20:40.291125  185942 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709250"
	I1028 12:20:40.291126  185942 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709250"
	I1028 12:20:40.291142  185942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709250"
	I1028 12:20:40.291148  185942 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709250"
	W1028 12:20:40.291158  185942 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:20:40.291182  185942 addons.go:69] Setting metrics-server=true in profile "embed-certs-709250"
	I1028 12:20:40.291220  185942 addons.go:234] Setting addon metrics-server=true in "embed-certs-709250"
	W1028 12:20:40.291233  185942 addons.go:243] addon metrics-server should already be in state true
	I1028 12:20:40.291282  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291195  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291593  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291631  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291727  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291771  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291786  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291813  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.292877  185942 out.go:177] * Verifying Kubernetes components...
	I1028 12:20:40.294858  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:20:40.310225  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I1028 12:20:40.310814  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.311524  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.311552  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.311961  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.312174  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.312867  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1028 12:20:40.312901  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I1028 12:20:40.313354  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313389  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313964  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.313987  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.313967  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.314040  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.314365  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314428  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314883  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.314907  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.315710  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.315744  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.316210  185942 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709250"
	W1028 12:20:40.316229  185942 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:20:40.316261  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.316619  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.316648  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.331940  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1028 12:20:40.332732  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.333487  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.333537  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.333932  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.334145  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.336054  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I1028 12:20:40.336291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.336441  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337079  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.337117  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.337211  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I1028 12:20:40.337597  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337998  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338171  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.338189  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.338291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.338925  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338972  185942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:20:40.339570  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.339621  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.340197  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.341080  185942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.341099  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:20:40.341115  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.341872  185942 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:20:40.343244  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:20:40.343278  185942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:20:40.343308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.344718  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345186  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.345216  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345457  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.345666  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.345842  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.346053  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.346977  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347514  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.347546  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347739  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.347936  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.348069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.348236  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.357912  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I1028 12:20:40.358358  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.358838  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.358858  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.359224  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.359441  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.361308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.361630  185942 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.361654  185942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:20:40.361675  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.365789  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366319  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.366347  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366659  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.366879  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.367069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.367245  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.526205  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:20:40.545404  185942 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555003  185942 node_ready.go:49] node "embed-certs-709250" has status "Ready":"True"
	I1028 12:20:40.555028  185942 node_ready.go:38] duration metric: took 9.592797ms for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555047  185942 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:40.564021  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:40.660020  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:20:40.660061  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:20:40.666435  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.691423  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.692384  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:20:40.692411  185942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:20:40.739518  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:20:40.739549  185942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:20:40.765228  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
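	(The lines above show the addon-install pattern this run uses: manifests are copied to /etc/kubernetes/addons on the node, then applied in one "kubectl apply -f ... -f ..." call with the cluster's embedded kubeconfig. Below is a minimal sketch of that copy-then-apply step; the kubectl path, kubeconfig path, and manifest names are taken from the log, while the helper itself is illustrative and runs locally rather than over SSH.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	const kubectl = "/var/lib/minikube/binaries/v1.31.2/kubectl"

	// applyAddonManifests mirrors the "sudo KUBECONFIG=... kubectl apply -f ..."
	// invocation recorded above, applying all manifests in a single call.
	func applyAddonManifests(manifests ...string) error {
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %v: %v\n%s", manifests, err, out)
		}
		return nil
	}

	func main() {
		if err := applyAddonManifests(
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		); err != nil {
			fmt.Println(err)
		}
	}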
	I1028 12:20:37.216347  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:39.716471  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.192384  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192422  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192491  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192514  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192740  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192759  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192783  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192791  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192915  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192942  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192951  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192962  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.193093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193125  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193131  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.193373  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193403  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193409  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.229776  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.229808  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.230111  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.230127  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.624688  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.624714  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625048  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.625055  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625066  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625074  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.625081  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625283  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625312  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625325  185942 addons.go:475] Verifying addon metrics-server=true in "embed-certs-709250"
	I1028 12:20:41.625329  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.627194  185942 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:20:37.771166  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:40.265616  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.265990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.628572  185942 addons.go:510] duration metric: took 1.337655555s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:20:42.572801  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.571062  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.571095  185942 pod_ready.go:82] duration metric: took 3.007040788s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.571110  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576592  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.576620  185942 pod_ready.go:82] duration metric: took 5.500425ms for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576633  185942 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:45.583586  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.216524  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:44.715547  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.758721  186547 pod_ready.go:82] duration metric: took 4m0.000295852s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:43.758758  186547 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:43.758783  186547 pod_ready.go:39] duration metric: took 4m13.710127509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:43.758811  186547 kubeadm.go:597] duration metric: took 4m21.647032906s to restartPrimaryControlPlane
	W1028 12:20:43.758873  186547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:43.758910  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:47.089478  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.089502  185942 pod_ready.go:82] duration metric: took 3.512861746s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.089512  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094229  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.094255  185942 pod_ready.go:82] duration metric: took 4.736326ms for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094267  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098823  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.098859  185942 pod_ready.go:82] duration metric: took 4.584003ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098872  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104063  185942 pod_ready.go:93] pod "kube-proxy-gck6r" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.104083  185942 pod_ready.go:82] duration metric: took 5.204526ms for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104091  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168177  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.168210  185942 pod_ready.go:82] duration metric: took 64.110225ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168221  185942 pod_ready.go:39] duration metric: took 6.613160968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
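	(The pod_ready lines above poll each system-critical pod until its Ready condition reports "True", within a 6m0s budget. A minimal sketch of that wait loop follows; it shells out to kubectl with a jsonpath filter instead of using the Kubernetes API client the real code uses, and the pod names and timeout are taken from the log.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reports whether the named kube-system pod has condition Ready=True.
	func podReady(name string) bool {
		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	// waitForPod polls until the pod is Ready or the timeout expires.
	func waitForPod(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if podReady(name) {
				fmt.Printf("pod %q is Ready\n", name)
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for pod %q", name)
	}

	func main() {
		for _, p := range []string{"etcd-embed-certs-709250", "kube-apiserver-embed-certs-709250"} {
			if err := waitForPod(p, 6*time.Minute); err != nil {
				fmt.Println(err)
			}
		}
	}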
	I1028 12:20:47.168243  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:20:47.168309  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:47.186907  185942 api_server.go:72] duration metric: took 6.896070864s to wait for apiserver process to appear ...
	I1028 12:20:47.186944  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:20:47.186998  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:20:47.191428  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:20:47.192677  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:20:47.192708  185942 api_server.go:131] duration metric: took 5.753471ms to wait for apiserver health ...
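	(The healthz check above is a plain HTTPS GET against https://<node-ip>:8443/healthz that is considered healthy once it returns 200 with body "ok". A minimal polling sketch of that probe follows; the address comes from the log, while the timeout and the InsecureSkipVerify transport are simplifying assumptions, since minikube verifies the apiserver certificate.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver healthz endpoint until it returns 200.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// For brevity only; the real client trusts the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		_ = waitForHealthz("https://192.168.39.211:8443/healthz", 2*time.Minute)
	}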
	I1028 12:20:47.192719  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:20:47.372534  185942 system_pods.go:59] 9 kube-system pods found
	I1028 12:20:47.372571  185942 system_pods.go:61] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.372580  185942 system_pods.go:61] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.372585  185942 system_pods.go:61] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.372590  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.372595  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.372599  185942 system_pods.go:61] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.372605  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.372614  185942 system_pods.go:61] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.372620  185942 system_pods.go:61] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.372633  185942 system_pods.go:74] duration metric: took 179.905205ms to wait for pod list to return data ...
	I1028 12:20:47.372647  185942 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:20:47.569853  185942 default_sa.go:45] found service account: "default"
	I1028 12:20:47.569886  185942 default_sa.go:55] duration metric: took 197.228265ms for default service account to be created ...
	I1028 12:20:47.569900  185942 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:20:47.770906  185942 system_pods.go:86] 9 kube-system pods found
	I1028 12:20:47.770941  185942 system_pods.go:89] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.770948  185942 system_pods.go:89] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.770953  185942 system_pods.go:89] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.770956  185942 system_pods.go:89] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.770960  185942 system_pods.go:89] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.770964  185942 system_pods.go:89] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.770967  185942 system_pods.go:89] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.770973  185942 system_pods.go:89] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.770977  185942 system_pods.go:89] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.770984  185942 system_pods.go:126] duration metric: took 201.078078ms to wait for k8s-apps to be running ...
	I1028 12:20:47.770990  185942 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:20:47.771033  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:47.787139  185942 system_svc.go:56] duration metric: took 16.13776ms WaitForService to wait for kubelet
	I1028 12:20:47.787171  185942 kubeadm.go:582] duration metric: took 7.496343244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:20:47.787191  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:20:47.969485  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:20:47.969516  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:20:47.969547  185942 node_conditions.go:105] duration metric: took 182.350787ms to run NodePressure ...
	I1028 12:20:47.969562  185942 start.go:241] waiting for startup goroutines ...
	I1028 12:20:47.969572  185942 start.go:246] waiting for cluster config update ...
	I1028 12:20:47.969586  185942 start.go:255] writing updated cluster config ...
	I1028 12:20:47.969916  185942 ssh_runner.go:195] Run: rm -f paused
	I1028 12:20:48.021806  185942 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:20:48.023816  185942 out.go:177] * Done! kubectl is now configured to use "embed-certs-709250" cluster and "default" namespace by default
	I1028 12:20:46.716844  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:49.216673  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:51.715101  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:53.715509  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:56.217201  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:58.715405  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:00.715890  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:03.214669  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:05.215054  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.108895  186547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.349960271s)
	I1028 12:21:10.108979  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:10.126064  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:21:10.139862  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:21:10.150752  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:21:10.150780  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:21:10.150837  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:21:10.161522  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:21:10.161604  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:21:10.172230  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:21:10.183231  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:21:10.183299  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:21:10.194261  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.204462  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:21:10.204524  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.214991  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:21:10.225246  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:21:10.225315  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
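	(The sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is grep'd for the expected control-plane endpoint and removed if the endpoint is absent, so the following "kubeadm init" starts from clean config. A minimal local-filesystem sketch of that check follows; the real code runs grep and rm over SSH, and the endpoint and file list are taken from the log.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanupStaleKubeconfigs removes kubeconfigs that do not reference the
	// expected control-plane endpoint, mirroring the grep/rm pairs above.
	func cleanupStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // file missing, as in this run: nothing to clean up
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Printf("%q not found in %s, removing\n", endpoint, f)
				_ = os.Remove(f)
			}
		}
	}

	func main() {
		cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8444")
	}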
	I1028 12:21:10.235439  186547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:21:10.280951  186547 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:21:10.281020  186547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:21:10.391997  186547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:21:10.392163  186547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:21:10.392297  186547 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:21:10.402113  186547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:21:07.217549  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:09.716985  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.404087  186547 out.go:235]   - Generating certificates and keys ...
	I1028 12:21:10.404194  186547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:21:10.404312  186547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:21:10.404441  186547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:21:10.404537  186547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:21:10.404642  186547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:21:10.404719  186547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:21:10.404824  186547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:21:10.404914  186547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:21:10.405021  186547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:21:10.405124  186547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:21:10.405185  186547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:21:10.405269  186547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:21:10.608657  186547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:21:10.910608  186547 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:21:11.076768  186547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:21:11.244109  186547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:21:11.685910  186547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:21:11.686470  186547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:21:11.692266  186547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:21:11.694100  186547 out.go:235]   - Booting up control plane ...
	I1028 12:21:11.694231  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:21:11.694377  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:21:11.694607  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:21:11.713908  186547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:21:11.720788  186547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:21:11.720874  186547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:21:11.856867  186547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:21:11.856998  186547 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:21:12.358968  186547 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.942759ms
	I1028 12:21:12.359067  186547 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:21:12.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:14.208408  185546 pod_ready.go:82] duration metric: took 4m0.000135609s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:21:14.208447  185546 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:21:14.208457  185546 pod_ready.go:39] duration metric: took 4m3.200735753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:14.208485  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:14.208519  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:14.208571  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:14.266154  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.266184  185546 cri.go:89] found id: ""
	I1028 12:21:14.266196  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:14.266255  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.271416  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:14.271497  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:14.310426  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.310457  185546 cri.go:89] found id: ""
	I1028 12:21:14.310467  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:14.310529  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.314961  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:14.315037  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:14.362502  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.362530  185546 cri.go:89] found id: ""
	I1028 12:21:14.362540  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:14.362602  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.368118  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:14.368198  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:14.416827  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.416867  185546 cri.go:89] found id: ""
	I1028 12:21:14.416877  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:14.416943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.421640  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:14.421716  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:14.473506  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:14.473552  185546 cri.go:89] found id: ""
	I1028 12:21:14.473563  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:14.473627  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.480106  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:14.480183  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:14.529939  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:14.529964  185546 cri.go:89] found id: ""
	I1028 12:21:14.529971  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:14.530120  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.536199  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:14.536264  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:14.578374  185546 cri.go:89] found id: ""
	I1028 12:21:14.578407  185546 logs.go:282] 0 containers: []
	W1028 12:21:14.578419  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:14.578428  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:14.578490  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:14.620216  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:14.620243  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:14.620249  185546 cri.go:89] found id: ""
	I1028 12:21:14.620258  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:14.620323  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.625798  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.630653  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:14.630683  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:14.645364  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:14.645404  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.686202  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:14.686234  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.730094  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:14.730125  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:14.786272  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:14.786322  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:14.875705  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:14.875746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.931913  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:14.931960  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.991914  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:14.991953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:15.037022  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:15.037056  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:15.107597  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:15.107649  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:15.161401  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:15.161442  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:15.201916  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:15.201953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:15.682647  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:15.682694  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
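	(The "Gathering logs for ..." block above follows one pattern per component: list container IDs with "crictl ps -a --quiet --name=<component>", then fetch the last 400 lines of each container's log with "crictl logs --tail 400 <id>". A minimal sketch of that loop follows; the component names and crictl flags are exactly those shown in the log, and error handling is simplified.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherComponentLogs lists containers for a component and dumps their logs.
	func gatherComponentLogs(component string) {
		idsOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", component, err)
			return
		}
		for _, id := range strings.Fields(string(idsOut)) {
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("logs for %s [%s]: %v\n", component, id, err)
				continue
			}
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner"}
		for _, c := range components {
			gatherComponentLogs(c)
		}
	}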
	I1028 12:21:17.861193  186547 kubeadm.go:310] [api-check] The API server is healthy after 5.502448006s
	I1028 12:21:17.874856  186547 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:21:17.889216  186547 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:21:17.933411  186547 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:21:17.933726  186547 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-349222 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:21:17.964667  186547 kubeadm.go:310] [bootstrap-token] Using token: o3vo7c.1x7759cggrb8kl7r
	I1028 12:21:17.966405  186547 out.go:235]   - Configuring RBAC rules ...
	I1028 12:21:17.966590  186547 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:21:17.982231  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:21:17.991850  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:21:17.996073  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:21:18.003531  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:21:18.008369  186547 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:21:18.272751  186547 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:21:18.724493  186547 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:21:19.269583  186547 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:21:19.270654  186547 kubeadm.go:310] 
	I1028 12:21:19.270715  186547 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:21:19.270722  186547 kubeadm.go:310] 
	I1028 12:21:19.270782  186547 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:21:19.270787  186547 kubeadm.go:310] 
	I1028 12:21:19.270816  186547 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:21:19.270875  186547 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:21:19.270938  186547 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:21:19.270949  186547 kubeadm.go:310] 
	I1028 12:21:19.271022  186547 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:21:19.271063  186547 kubeadm.go:310] 
	I1028 12:21:19.271165  186547 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:21:19.271190  186547 kubeadm.go:310] 
	I1028 12:21:19.271266  186547 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:21:19.271380  186547 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:21:19.271470  186547 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:21:19.271479  186547 kubeadm.go:310] 
	I1028 12:21:19.271600  186547 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:21:19.271697  186547 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:21:19.271709  186547 kubeadm.go:310] 
	I1028 12:21:19.271838  186547 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272010  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:21:19.272068  186547 kubeadm.go:310] 	--control-plane 
	I1028 12:21:19.272079  186547 kubeadm.go:310] 
	I1028 12:21:19.272250  186547 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:21:19.272270  186547 kubeadm.go:310] 
	I1028 12:21:19.272391  186547 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272568  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 12:21:19.273899  186547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:21:19.273955  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:21:19.273977  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:21:19.275868  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:21:18.355132  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:18.373260  185546 api_server.go:72] duration metric: took 4m14.615888944s to wait for apiserver process to appear ...
	I1028 12:21:18.373292  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:18.373353  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:18.373410  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:18.413207  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.413239  185546 cri.go:89] found id: ""
	I1028 12:21:18.413250  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:18.413336  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.419588  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:18.419655  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:18.476341  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.476373  185546 cri.go:89] found id: ""
	I1028 12:21:18.476383  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:18.476450  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.482835  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:18.482926  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:18.524934  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.524964  185546 cri.go:89] found id: ""
	I1028 12:21:18.524975  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:18.525040  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.530198  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:18.530284  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:18.577310  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:18.577338  185546 cri.go:89] found id: ""
	I1028 12:21:18.577349  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:18.577413  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.583048  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:18.583133  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:18.622556  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:18.622587  185546 cri.go:89] found id: ""
	I1028 12:21:18.622598  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:18.622701  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.628450  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:18.628540  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:18.674827  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:18.674861  185546 cri.go:89] found id: ""
	I1028 12:21:18.674873  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:18.674943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.680282  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:18.680354  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:18.738014  185546 cri.go:89] found id: ""
	I1028 12:21:18.738044  185546 logs.go:282] 0 containers: []
	W1028 12:21:18.738061  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:18.738070  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:18.738142  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:18.780615  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:18.780645  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:18.780651  185546 cri.go:89] found id: ""
	I1028 12:21:18.780660  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:18.780725  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.786003  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.790208  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:18.790231  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:18.806481  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:18.806523  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.853343  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:18.853382  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.906386  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:18.906424  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.948149  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:18.948182  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:19.000642  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:19.000678  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:19.038715  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:19.038744  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:19.079234  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:19.079271  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:19.147309  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:19.147349  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:19.271582  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:19.271620  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:19.319149  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:19.319195  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:19.385399  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:19.385437  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:19.811993  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:19.812035  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:19.277402  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:21:19.296307  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
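	(The two lines above create /etc/cni/net.d and write the 496-byte 1-k8s.conflist that configures the bridge CNI chosen for the kvm2 + crio combination. The exact contents are not in the log, so the sketch below writes a representative bridge-plus-portmap conflist as an assumed example, not the file minikube generated.)

	package main

	import (
		"fmt"
		"os"
	)

	// conflist is an assumed example of a bridge CNI chain with host-local IPAM.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			fmt.Println(err)
			return
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println(err)
		}
	}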
	I1028 12:21:19.323315  186547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-349222 minikube.k8s.io/updated_at=2024_10_28T12_21_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=default-k8s-diff-port-349222 minikube.k8s.io/primary=true
	I1028 12:21:19.550855  186547 ops.go:34] apiserver oom_adj: -16
	I1028 12:21:19.550882  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.051004  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.551001  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.051215  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.551283  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.050989  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.551423  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.051101  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.151453  186547 kubeadm.go:1113] duration metric: took 3.828156807s to wait for elevateKubeSystemPrivileges
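	(The elevateKubeSystemPrivileges step above creates the minikube-rbac clusterrolebinding and then retries "kubectl get sa default" roughly every 500ms until the default service account exists. A minimal sketch of that create-then-poll loop follows; the kubectl and kubeconfig paths and the binding name come from the log, while the 2-minute budget is an assumption.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	const (
		kubectl    = "/var/lib/minikube/binaries/v1.31.2/kubectl"
		kubeconfig = "--kubeconfig=/var/lib/minikube/kubeconfig"
	)

	// run invokes the bundled kubectl with the cluster kubeconfig via sudo.
	func run(args ...string) error {
		full := append([]string{kubectl}, args...)
		full = append(full, kubeconfig)
		return exec.Command("sudo", full...).Run()
	}

	func main() {
		// Bind kube-system's default service account to cluster-admin
		// (real code tolerates "already exists").
		_ = run("create", "clusterrolebinding", "minikube-rbac",
			"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default")

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if err := run("get", "sa", "default"); err == nil {
				fmt.Println("default service account is available")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}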
	I1028 12:21:23.151505  186547 kubeadm.go:394] duration metric: took 5m1.103220882s to StartCluster
	I1028 12:21:23.151530  186547 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.151623  186547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:21:23.153557  186547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.153874  186547 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:21:23.153996  186547 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:21:23.154101  186547 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154122  186547 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154133  186547 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:21:23.154128  186547 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154165  186547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-349222"
	I1028 12:21:23.154160  186547 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154197  186547 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154213  186547 addons.go:243] addon metrics-server should already be in state true
	I1028 12:21:23.154167  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154254  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154664  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154679  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154749  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154135  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:21:23.154803  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154844  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154948  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.155649  186547 out.go:177] * Verifying Kubernetes components...
	I1028 12:21:23.157234  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:21:23.172278  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I1028 12:21:23.172870  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.173402  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.173429  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.173851  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.174056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.176299  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1028 12:21:23.176307  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I1028 12:21:23.176897  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177023  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177553  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177576  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177589  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177606  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177887  186547 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.177912  186547 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:21:23.177945  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.177971  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178030  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178369  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178404  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178541  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178572  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178961  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.179002  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.196089  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I1028 12:21:23.197979  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.198578  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.198607  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.199082  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.199301  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.199604  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I1028 12:21:23.200120  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.200519  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.200539  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.200938  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.201204  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.201711  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.201794  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1028 12:21:23.202225  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.202937  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.202956  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.203305  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.203753  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.203791  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.204026  186547 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:21:23.204210  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.205470  186547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:21:23.205490  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:21:23.205554  186547 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:21:23.205576  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.207334  186547 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.207352  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:21:23.207372  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.209573  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.210230  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210366  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.210608  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.210806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.211061  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.211884  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.211910  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.211928  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.212104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.212351  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.212570  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.212762  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.231664  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1028 12:21:23.232283  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.232904  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.232929  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.233414  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.233658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.236162  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.236665  186547 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.236680  186547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:21:23.236700  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.240368  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.240697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240848  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.241034  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.241156  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.241281  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
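	Annotation: the sshutil lines above open SSH connections to 192.168.50.75:22 as user "docker" with the profile's id_rsa key; the runner then uses those connections for commands such as the `sudo systemctl start kubelet` on the next line. Below is a minimal sketch of such a connection using golang.org/x/crypto/ssh. It is not minikube's ssh_runner; the address, user, and key path are simply copied from the log.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user, and address taken from the sshutil lines above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
		}
		client, err := ssh.Dial("tcp", "192.168.50.75:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		// Same kind of command the runner issues right after connecting.
		out, err := sess.CombinedOutput("sudo systemctl start kubelet")
		fmt.Printf("%s err=%v\n", out, err)
	}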
	I1028 12:21:23.409461  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:21:23.430686  186547 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442439  186547 node_ready.go:49] node "default-k8s-diff-port-349222" has status "Ready":"True"
	I1028 12:21:23.442466  186547 node_ready.go:38] duration metric: took 11.749381ms for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442480  186547 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:23.447741  186547 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:23.515393  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.545556  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.575253  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:21:23.575280  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:21:23.663892  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:21:23.663920  186547 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:21:23.745621  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:23.745656  186547 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:21:23.823360  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
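	Annotation: the addon manifests are applied by invoking the cluster's own kubectl binary on the node with the in-VM kubeconfig (the command on the line above). A rough equivalent, sketched with os/exec and assuming it runs on the node itself (minikube actually sends it over the SSH runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the logged command; all paths are the ones from the log.
		args := []string{
			"env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
			"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("%s err=%v\n", out, err)
	}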
	I1028 12:21:24.391754  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.391789  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.392092  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.392112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.392123  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.392130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393697  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393716  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.393725  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.393733  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393810  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393828  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393886  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394088  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.394112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.413957  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.414000  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.414363  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.414385  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853053  186547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029641945s)
	I1028 12:21:24.853107  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853123  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853434  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.853492  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853501  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853518  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853543  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853784  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853801  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853813  186547 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-349222"
	I1028 12:21:24.855707  186547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:21:22.373623  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:21:22.379559  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:21:22.380750  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:22.380772  185546 api_server.go:131] duration metric: took 4.007460794s to wait for apiserver health ...
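	Annotation: the healthz wait above polls https://192.168.72.156:8443/healthz until it answers 200 with body "ok". A minimal probe of that endpoint could look like the sketch below; certificate verification is skipped purely for illustration, whereas a real client would trust the cluster CA from the kubeconfig.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only: skipping verification is not acceptable for real clusters.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.72.156:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
	}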
	I1028 12:21:22.380783  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:22.380811  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:22.380875  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:22.426678  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:22.426710  185546 cri.go:89] found id: ""
	I1028 12:21:22.426720  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:22.426781  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.431942  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:22.432014  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:22.472504  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:22.472531  185546 cri.go:89] found id: ""
	I1028 12:21:22.472540  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:22.472595  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.478446  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:22.478511  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:22.520149  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.520169  185546 cri.go:89] found id: ""
	I1028 12:21:22.520177  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:22.520235  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.525716  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:22.525804  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:22.564801  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:22.564832  185546 cri.go:89] found id: ""
	I1028 12:21:22.564844  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:22.564909  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.570065  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:22.570147  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:22.613601  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.613628  185546 cri.go:89] found id: ""
	I1028 12:21:22.613637  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:22.613700  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.618413  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:22.618483  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:22.664329  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.664358  185546 cri.go:89] found id: ""
	I1028 12:21:22.664369  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:22.664430  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.669013  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:22.669084  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:22.706046  185546 cri.go:89] found id: ""
	I1028 12:21:22.706074  185546 logs.go:282] 0 containers: []
	W1028 12:21:22.706084  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:22.706091  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:22.706159  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:22.747718  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.747744  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.747750  185546 cri.go:89] found id: ""
	I1028 12:21:22.747759  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:22.747825  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.752857  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.758383  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:22.758410  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.800846  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:22.800882  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.858663  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:22.858702  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.896915  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:22.896959  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.938476  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:22.938503  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.984601  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:22.984628  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:23.000223  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:23.000259  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:23.130709  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:23.130746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:23.189821  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:23.189859  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:23.244463  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:23.244535  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:23.299279  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:23.299318  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:23.714691  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:23.714730  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:23.777703  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:23.777749  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
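	Annotation: the "Gathering logs for ..." block above shells out per component, using crictl for container logs and journalctl for the kubelet and CRI-O units, always tailing the last 400 lines. A stripped-down version of the crictl piece, again assuming it runs on the node rather than through minikube's SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLogs fetches the last n lines of a container's logs via crictl,
	// the same command shape used in the log above.
	func tailContainerLogs(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Container ID taken from the kube-proxy entry in the log above.
		logs, err := tailContainerLogs("1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0", 400)
		fmt.Println("err:", err, "bytes:", len(logs))
	}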
	I1028 12:21:26.364133  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:21:26.364166  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.364171  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.364175  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.364179  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.364182  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.364185  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.364191  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.364195  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.364201  185546 system_pods.go:74] duration metric: took 3.98341316s to wait for pod list to return data ...
	I1028 12:21:26.364209  185546 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:26.366899  185546 default_sa.go:45] found service account: "default"
	I1028 12:21:26.366925  185546 default_sa.go:55] duration metric: took 2.710943ms for default service account to be created ...
	I1028 12:21:26.366934  185546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:26.371193  185546 system_pods.go:86] 8 kube-system pods found
	I1028 12:21:26.371219  185546 system_pods.go:89] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.371224  185546 system_pods.go:89] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.371228  185546 system_pods.go:89] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.371233  185546 system_pods.go:89] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.371237  185546 system_pods.go:89] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.371240  185546 system_pods.go:89] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.371246  185546 system_pods.go:89] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.371250  185546 system_pods.go:89] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.371257  185546 system_pods.go:126] duration metric: took 4.318058ms to wait for k8s-apps to be running ...
	I1028 12:21:26.371265  185546 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:26.371317  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:26.389093  185546 system_svc.go:56] duration metric: took 17.81758ms WaitForService to wait for kubelet
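	Annotation: the kubelet-service wait above boils down to `systemctl is-active --quiet ...`, which reports health purely through its exit status. A tiny sketch of interpreting that result (simplified from the exact command string in the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 means the unit is active; any other status means inactive or failed.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}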
	I1028 12:21:26.389131  185546 kubeadm.go:582] duration metric: took 4m22.631766189s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:26.389158  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:26.392700  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:26.392728  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:26.392741  185546 node_conditions.go:105] duration metric: took 3.576663ms to run NodePressure ...
	I1028 12:21:26.392757  185546 start.go:241] waiting for startup goroutines ...
	I1028 12:21:26.392766  185546 start.go:246] waiting for cluster config update ...
	I1028 12:21:26.392781  185546 start.go:255] writing updated cluster config ...
	I1028 12:21:26.393086  185546 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:26.444274  185546 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:26.446322  185546 out.go:177] * Done! kubectl is now configured to use "no-preload-871884" cluster and "default" namespace by default
	I1028 12:21:24.856866  186547 addons.go:510] duration metric: took 1.702877543s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:21:25.462800  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:27.954511  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:30.454530  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.455161  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.955218  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.955242  186547 pod_ready.go:82] duration metric: took 9.507473956s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.955253  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.960990  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.961018  186547 pod_ready.go:82] duration metric: took 5.757431ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.961032  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966957  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.966981  186547 pod_ready.go:82] duration metric: took 5.940549ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966991  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972168  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.972194  186547 pod_ready.go:82] duration metric: took 5.195057ms for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972205  186547 pod_ready.go:39] duration metric: took 9.529713389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
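	Annotation: the pod_ready waits above repeatedly check whether each control-plane pod reports condition Ready=True. The sketch below expresses that wait with client-go; it is not minikube's pod_ready implementation, just the general shape, and the kubeconfig path is a placeholder.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports Ready=True or the timeout expires.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = waitPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-349222", 6*time.Minute)
		fmt.Println("ready:", err == nil)
	}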
	I1028 12:21:32.972224  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:32.972294  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:32.988675  186547 api_server.go:72] duration metric: took 9.83476496s to wait for apiserver process to appear ...
	I1028 12:21:32.988711  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:32.988736  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:21:32.993068  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:21:32.994352  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:32.994377  186547 api_server.go:131] duration metric: took 5.656136ms to wait for apiserver health ...
	I1028 12:21:32.994387  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:32.999982  186547 system_pods.go:59] 9 kube-system pods found
	I1028 12:21:33.000010  186547 system_pods.go:61] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.000017  186547 system_pods.go:61] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.000024  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.000029  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.000033  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.000037  186547 system_pods.go:61] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.000040  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.000046  186547 system_pods.go:61] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.000051  186547 system_pods.go:61] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.000064  186547 system_pods.go:74] duration metric: took 5.66991ms to wait for pod list to return data ...
	I1028 12:21:33.000075  186547 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:33.003124  186547 default_sa.go:45] found service account: "default"
	I1028 12:21:33.003149  186547 default_sa.go:55] duration metric: took 3.067652ms for default service account to be created ...
	I1028 12:21:33.003159  186547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:33.155864  186547 system_pods.go:86] 9 kube-system pods found
	I1028 12:21:33.155902  186547 system_pods.go:89] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.155914  186547 system_pods.go:89] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.155921  186547 system_pods.go:89] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.155931  186547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.155938  186547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.155943  186547 system_pods.go:89] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.155948  186547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.155956  186547 system_pods.go:89] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.155965  186547 system_pods.go:89] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.155977  186547 system_pods.go:126] duration metric: took 152.809784ms to wait for k8s-apps to be running ...
	I1028 12:21:33.155991  186547 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:33.156049  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:33.171592  186547 system_svc.go:56] duration metric: took 15.589436ms WaitForService to wait for kubelet
	I1028 12:21:33.171647  186547 kubeadm.go:582] duration metric: took 10.017726239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:33.171672  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:33.352932  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:33.352984  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:33.352995  186547 node_conditions.go:105] duration metric: took 181.317488ms to run NodePressure ...
	I1028 12:21:33.353006  186547 start.go:241] waiting for startup goroutines ...
	I1028 12:21:33.353014  186547 start.go:246] waiting for cluster config update ...
	I1028 12:21:33.353024  186547 start.go:255] writing updated cluster config ...
	I1028 12:21:33.353314  186547 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:33.405276  186547 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:33.407589  186547 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-349222" cluster and "default" namespace by default
	I1028 12:22:04.038479  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:22:04.038595  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:22:04.040170  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.040244  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.040356  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.040466  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.040579  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:04.040700  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:04.042557  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:04.042662  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:04.042757  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:04.042877  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:04.042984  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:04.043096  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:04.043158  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:04.043247  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:04.043341  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:04.043442  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:04.043558  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:04.043622  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:04.043675  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:04.043718  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:04.043768  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:04.043825  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:04.043871  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:04.044021  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:04.044164  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:04.044224  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:04.044332  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:04.046085  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:04.046237  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:04.046370  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:04.046463  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:04.046544  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:04.046679  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:04.046728  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:04.046786  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.046976  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047099  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047318  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047393  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047554  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047611  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047799  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047892  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.048151  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.048167  186170 kubeadm.go:310] 
	I1028 12:22:04.048208  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:22:04.048252  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:22:04.048262  186170 kubeadm.go:310] 
	I1028 12:22:04.048317  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:22:04.048363  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:22:04.048453  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:22:04.048464  186170 kubeadm.go:310] 
	I1028 12:22:04.048557  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:22:04.048604  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:22:04.048658  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:22:04.048672  186170 kubeadm.go:310] 
	I1028 12:22:04.048789  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:22:04.048872  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:22:04.048879  186170 kubeadm.go:310] 
	I1028 12:22:04.049027  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:22:04.049143  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:22:04.049246  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:22:04.049347  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:22:04.049428  186170 kubeadm.go:310] 
	W1028 12:22:04.049541  186170 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
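	Annotation: the kubeadm failure above points at the kubelet and the container runtime. The sketch below simply bundles the diagnostics kubeadm suggests (systemctl status, journalctl, crictl ps) plus the kubelet healthz probe on port 10248 that the [kubelet-check] lines keep failing; it assumes it is run on the failing node and is only a convenience wrapper around the commands kubeadm already names.

	package main

	import (
		"fmt"
		"net/http"
		"os/exec"
		"time"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v (err=%v)\n%s\n", name, args, err, out)
	}

	func main() {
		// Commands taken from the kubeadm output above.
		run("systemctl", "status", "kubelet")
		run("journalctl", "-xeu", "kubelet", "--no-pager", "-n", "200")
		run("sudo", "crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a")

		// The same healthz probe the [kubelet-check] lines are failing on.
		client := &http.Client{Timeout: 3 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Println("kubelet healthz:", err) // e.g. connection refused, as seen above
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz status:", resp.StatusCode)
	}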
	
	I1028 12:22:04.049593  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:22:04.555608  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:22:04.571673  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:22:04.583645  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:22:04.583667  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:22:04.583708  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:22:04.594436  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:22:04.594497  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:22:04.605784  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:22:04.616699  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:22:04.616781  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:22:04.628581  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.639511  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:22:04.639608  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.650503  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:22:04.662383  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:22:04.662445  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
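	Annotation: the grep/rm sequence above checks each kubeconfig-style file under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it before retrying kubeadm init. A condensed Go version of that check (not minikube's actual code):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			// Missing or stale (no expected endpoint): remove it so kubeadm regenerates it.
			if err != nil || !strings.Contains(string(data), endpoint) {
				_ = os.Remove(f)
			}
		}
	}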
	I1028 12:22:04.673286  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:22:04.755504  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.755597  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.903636  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.903808  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.903902  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:05.095520  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:05.097710  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:05.097850  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:05.097937  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:05.098061  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:05.098152  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:05.098252  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:05.098346  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:05.098440  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:05.098905  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:05.099253  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:05.099726  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:05.099786  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:05.099872  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:05.357781  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:05.538771  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:05.744145  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:06.074866  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:06.090636  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:06.091772  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:06.091863  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:06.255534  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:06.257598  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:06.257740  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:06.264309  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:06.266553  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:06.266699  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:06.268340  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:46.271413  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:46.271550  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:46.271812  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:51.271863  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:51.272118  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:01.272732  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:01.272940  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:21.273621  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:21.273888  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.272718  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:24:01.273041  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.273073  186170 kubeadm.go:310] 
	I1028 12:24:01.273126  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:24:01.273220  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:24:01.273249  186170 kubeadm.go:310] 
	I1028 12:24:01.273303  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:24:01.273375  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:24:01.273508  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:24:01.273520  186170 kubeadm.go:310] 
	I1028 12:24:01.273665  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:24:01.273717  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:24:01.273760  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:24:01.273770  186170 kubeadm.go:310] 
	I1028 12:24:01.273900  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:24:01.273966  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:24:01.273972  186170 kubeadm.go:310] 
	I1028 12:24:01.274090  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:24:01.274165  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:24:01.274233  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:24:01.274294  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:24:01.274302  186170 kubeadm.go:310] 
	I1028 12:24:01.275128  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:24:01.275221  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:24:01.275324  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:24:01.275400  186170 kubeadm.go:394] duration metric: took 7m59.062813621s to StartCluster
	I1028 12:24:01.275480  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:24:01.275551  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:24:01.326735  186170 cri.go:89] found id: ""
	I1028 12:24:01.326760  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.326767  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:24:01.326774  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:24:01.326822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:24:01.368065  186170 cri.go:89] found id: ""
	I1028 12:24:01.368094  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.368103  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:24:01.368109  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:24:01.368162  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:24:01.410391  186170 cri.go:89] found id: ""
	I1028 12:24:01.410425  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.410437  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:24:01.410446  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:24:01.410515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:24:01.453290  186170 cri.go:89] found id: ""
	I1028 12:24:01.453332  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.453343  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:24:01.453361  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:24:01.453422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:24:01.490513  186170 cri.go:89] found id: ""
	I1028 12:24:01.490540  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.490547  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:24:01.490553  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:24:01.490600  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:24:01.528320  186170 cri.go:89] found id: ""
	I1028 12:24:01.528350  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.528361  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:24:01.528369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:24:01.528430  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:24:01.566998  186170 cri.go:89] found id: ""
	I1028 12:24:01.567030  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.567041  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:24:01.567050  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:24:01.567113  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:24:01.600946  186170 cri.go:89] found id: ""
	I1028 12:24:01.600973  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.600983  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:24:01.600997  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:24:01.601018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:24:01.615132  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:24:01.615161  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:24:01.737336  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:24:01.737371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:24:01.737387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:24:01.862216  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:24:01.862257  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:24:01.906635  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:24:01.906666  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:24:01.959555  186170 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:24:01.959629  186170 out.go:270] * 
	W1028 12:24:01.959691  186170 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.959706  186170 out.go:270] * 
	W1028 12:24:01.960513  186170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:24:01.963818  186170 out.go:201] 
	W1028 12:24:01.965768  186170 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.965852  186170 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:24:01.965874  186170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:24:01.967350  186170 out.go:201] 
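	The suggestion above points at a kubelet that never became healthy, commonly a cgroup-driver mismatch between the kubelet and CRI-O. A hedged sketch of the suggested retry and follow-up checks, with <profile> standing in for the cluster profile name (a placeholder, not taken from this log), could look like:
	
	    # retry the start with the systemd cgroup driver, as the suggestion above recommends
	    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	        --kubernetes-version=v1.20.0 \
	        --extra-config=kubelet.cgroup-driver=systemd
	    # if it still fails, inspect the kubelet on the node
	    minikube ssh -p <profile> "sudo systemctl status kubelet"
	    minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"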
	
	
	==> CRI-O <==
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.162371337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118590162335390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b5db5f9-2363-4743-b4b5-c454ce3d4647 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.163245610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1cff9e1a-7a8c-4389-b420-2925037f1edd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.163343082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1cff9e1a-7a8c-4389-b420-2925037f1edd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.163633375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a152ca26f66cbcaf82b768858ea162d9ae60de9ee938ee5bb3ee0e3088d9835,PodSandboxId:3bb168d9739ed55468053aa4a0428fbd52382211ae5a568cb63d30a3c2910169,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730118042039805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gck6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f06472ac-a4c8-4982-822b-29fccd838314,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cad90f41b3ff13cea8abeddf1c30cd0c70afbc78b5bdf097eac4e4a443f478,PodSandboxId:b763e86b15fbb6a25dcd7f5849a0889da8e2943a502f04d7a0dcea3b9708b926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042130329753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p59fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ad8040-64c4-429c-905e-29f8b65e4477,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a806d8aeab6c357d50125044b802f410bfceea0005ddb47889d8a1faf2d07bef,PodSandboxId:d8346dff9c0fdc11ba74a942e8f6ffdd2f9cd7327df000f7d1ca4cd456c1ea3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042077836202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c1f7ad-7f31-4280-99e
3-70594c81237f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eb80a56c8ed084ecacb6b9c43e29e8b07d7ba5ab87e109ff549fb54b3785f4,PodSandboxId:081abd61e8838984219cb13d3f5e4f495e42492b2041b74cd8ecdd603795eb81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17301180418
83763081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b66608-d85e-4dfd-96ab-a1295165e2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be038350ba0561c80512791c25946177e679bc87a18b041091ff1fc6105d1539,PodSandboxId:4ca30b73fad62d4ac47a668f7c4659f9e93021d70c2be2642eaa8ea8215e5358,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118029503300428,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72aabf3490eca4c8563018a0851e820,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ec6c57ee1ebbb4dd22d98288839f4b5fe3ad235d762c554effa1cbbcbd9047,PodSandboxId:2b5ab72e160723f7694f0c78de4cf6cb25155fe7ffad2cc3c78264ea034fa0b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118029489724049,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d6570bdc3ed484abffaeb0ecd8cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c09319a03cd6fc4e7b92df78620192d54885cf982801d6f4ae3638fa0bb0a4d,PodSandboxId:9d26c057428780f96661a5f64af6bdc8b7deab968ab153c8ced460411d33efa9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118029426809985,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e6fb27555e9f5a2c2f3442702674829f0e267f75fbec5b8bcd434c802d6d82,PodSandboxId:822adcfd48466fe4de6163c7a2bb5d869f7415325661236f5111c7d16495758b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118029396882352,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a64489a3b53ca866d51ea7866e987303,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a285c6010e35886eba140354599221c6822f9e1d3c0370a4001b24894ae0defe,PodSandboxId:c1a27a87cb0a26c105d25a553403aac88105befc98f8ded2a26116cf5aa54c15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117742061056735,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1cff9e1a-7a8c-4389-b420-2925037f1edd name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.174796483Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:fake.domain/registry.k8s.io/echoserver:1.4,Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-28T12:20:41.367757055Z,kubernetes.io/config.source: api,},UserSpecifiedImage:,RuntimeHandler:,},Verbose:false,}" file="otel-collector/interceptors.go:62" id=9f6dc70d-601b-4d69-9934-bfbe12e38702 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.174892934Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:27" id=9f6dc70d-601b-4d69-9934-bfbe12e38702 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.175018302Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.175233186Z" level=debug msg="Can't find fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:97" id=9f6dc70d-601b-4d69-9934-bfbe12e38702 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.175266417Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:111" id=9f6dc70d-601b-4d69-9934-bfbe12e38702 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.175288882Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:33" id=9f6dc70d-601b-4d69-9934-bfbe12e38702 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.175340546Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=9f6dc70d-601b-4d69-9934-bfbe12e38702 name=/runtime.v1.ImageService/ImageStatus
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.206611847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a49b4486-75fe-43aa-a4d4-8fa19b4e189b name=/runtime.v1.RuntimeService/Version
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.206698591Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a49b4486-75fe-43aa-a4d4-8fa19b4e189b name=/runtime.v1.RuntimeService/Version
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.208388934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7f21b6b-4ea2-4408-828e-a5bd1262cf9b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.208790942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118590208768623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7f21b6b-4ea2-4408-828e-a5bd1262cf9b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.209464297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5658a337-4c80-4c78-aad8-a3906b856398 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.209537200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5658a337-4c80-4c78-aad8-a3906b856398 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.209739619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a152ca26f66cbcaf82b768858ea162d9ae60de9ee938ee5bb3ee0e3088d9835,PodSandboxId:3bb168d9739ed55468053aa4a0428fbd52382211ae5a568cb63d30a3c2910169,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730118042039805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gck6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f06472ac-a4c8-4982-822b-29fccd838314,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cad90f41b3ff13cea8abeddf1c30cd0c70afbc78b5bdf097eac4e4a443f478,PodSandboxId:b763e86b15fbb6a25dcd7f5849a0889da8e2943a502f04d7a0dcea3b9708b926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042130329753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p59fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ad8040-64c4-429c-905e-29f8b65e4477,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a806d8aeab6c357d50125044b802f410bfceea0005ddb47889d8a1faf2d07bef,PodSandboxId:d8346dff9c0fdc11ba74a942e8f6ffdd2f9cd7327df000f7d1ca4cd456c1ea3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042077836202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c1f7ad-7f31-4280-99e
3-70594c81237f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eb80a56c8ed084ecacb6b9c43e29e8b07d7ba5ab87e109ff549fb54b3785f4,PodSandboxId:081abd61e8838984219cb13d3f5e4f495e42492b2041b74cd8ecdd603795eb81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17301180418
83763081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b66608-d85e-4dfd-96ab-a1295165e2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be038350ba0561c80512791c25946177e679bc87a18b041091ff1fc6105d1539,PodSandboxId:4ca30b73fad62d4ac47a668f7c4659f9e93021d70c2be2642eaa8ea8215e5358,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118029503300428,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72aabf3490eca4c8563018a0851e820,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ec6c57ee1ebbb4dd22d98288839f4b5fe3ad235d762c554effa1cbbcbd9047,PodSandboxId:2b5ab72e160723f7694f0c78de4cf6cb25155fe7ffad2cc3c78264ea034fa0b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118029489724049,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d6570bdc3ed484abffaeb0ecd8cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c09319a03cd6fc4e7b92df78620192d54885cf982801d6f4ae3638fa0bb0a4d,PodSandboxId:9d26c057428780f96661a5f64af6bdc8b7deab968ab153c8ced460411d33efa9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118029426809985,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e6fb27555e9f5a2c2f3442702674829f0e267f75fbec5b8bcd434c802d6d82,PodSandboxId:822adcfd48466fe4de6163c7a2bb5d869f7415325661236f5111c7d16495758b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118029396882352,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a64489a3b53ca866d51ea7866e987303,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a285c6010e35886eba140354599221c6822f9e1d3c0370a4001b24894ae0defe,PodSandboxId:c1a27a87cb0a26c105d25a553403aac88105befc98f8ded2a26116cf5aa54c15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117742061056735,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5658a337-4c80-4c78-aad8-a3906b856398 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.249673597Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11c6d442-41a0-4f77-b1f6-b83597027074 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.249766946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11c6d442-41a0-4f77-b1f6-b83597027074 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.251615024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=594d8ca7-7436-4197-9878-94cf54f8beb1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.252002860Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118590251981381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=594d8ca7-7436-4197-9878-94cf54f8beb1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.252641848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=480ba1b0-6ef8-4281-8fca-6302547c4e60 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.252730520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=480ba1b0-6ef8-4281-8fca-6302547c4e60 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:29:50 embed-certs-709250 crio[706]: time="2024-10-28 12:29:50.252945175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a152ca26f66cbcaf82b768858ea162d9ae60de9ee938ee5bb3ee0e3088d9835,PodSandboxId:3bb168d9739ed55468053aa4a0428fbd52382211ae5a568cb63d30a3c2910169,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730118042039805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gck6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f06472ac-a4c8-4982-822b-29fccd838314,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cad90f41b3ff13cea8abeddf1c30cd0c70afbc78b5bdf097eac4e4a443f478,PodSandboxId:b763e86b15fbb6a25dcd7f5849a0889da8e2943a502f04d7a0dcea3b9708b926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042130329753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p59fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ad8040-64c4-429c-905e-29f8b65e4477,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a806d8aeab6c357d50125044b802f410bfceea0005ddb47889d8a1faf2d07bef,PodSandboxId:d8346dff9c0fdc11ba74a942e8f6ffdd2f9cd7327df000f7d1ca4cd456c1ea3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042077836202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c1f7ad-7f31-4280-99e
3-70594c81237f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eb80a56c8ed084ecacb6b9c43e29e8b07d7ba5ab87e109ff549fb54b3785f4,PodSandboxId:081abd61e8838984219cb13d3f5e4f495e42492b2041b74cd8ecdd603795eb81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17301180418
83763081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b66608-d85e-4dfd-96ab-a1295165e2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be038350ba0561c80512791c25946177e679bc87a18b041091ff1fc6105d1539,PodSandboxId:4ca30b73fad62d4ac47a668f7c4659f9e93021d70c2be2642eaa8ea8215e5358,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118029503300428,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72aabf3490eca4c8563018a0851e820,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ec6c57ee1ebbb4dd22d98288839f4b5fe3ad235d762c554effa1cbbcbd9047,PodSandboxId:2b5ab72e160723f7694f0c78de4cf6cb25155fe7ffad2cc3c78264ea034fa0b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118029489724049,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d6570bdc3ed484abffaeb0ecd8cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c09319a03cd6fc4e7b92df78620192d54885cf982801d6f4ae3638fa0bb0a4d,PodSandboxId:9d26c057428780f96661a5f64af6bdc8b7deab968ab153c8ced460411d33efa9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118029426809985,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e6fb27555e9f5a2c2f3442702674829f0e267f75fbec5b8bcd434c802d6d82,PodSandboxId:822adcfd48466fe4de6163c7a2bb5d869f7415325661236f5111c7d16495758b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118029396882352,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a64489a3b53ca866d51ea7866e987303,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a285c6010e35886eba140354599221c6822f9e1d3c0370a4001b24894ae0defe,PodSandboxId:c1a27a87cb0a26c105d25a553403aac88105befc98f8ded2a26116cf5aa54c15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117742061056735,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=480ba1b0-6ef8-4281-8fca-6302547c4e60 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66cad90f41b3f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   b763e86b15fbb       coredns-7c65d6cfc9-p59fl
	a806d8aeab6c3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   d8346dff9c0fd       coredns-7c65d6cfc9-sx86n
	1a152ca26f66c       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   3bb168d9739ed       kube-proxy-gck6r
	14eb80a56c8ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   081abd61e8838       storage-provisioner
	be038350ba056       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   4ca30b73fad62       etcd-embed-certs-709250
	b6ec6c57ee1eb       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   2b5ab72e16072       kube-controller-manager-embed-certs-709250
	6c09319a03cd6       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   9d26c05742878       kube-apiserver-embed-certs-709250
	30e6fb27555e9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   822adcfd48466       kube-scheduler-embed-certs-709250
	a285c6010e358       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   c1a27a87cb0a2       kube-apiserver-embed-certs-709250
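
Everything on attempt 2 is still Running roughly nine minutes after the restart; only the attempt-1 kube-apiserver remains in the Exited state. For reference, a similar snapshot can be taken straight from the node. This is only a sketch: it assumes the embed-certs-709250 VM is still up and that crictl is present in the guest image, which is typical for this driver but not confirmed by the report.

    minikube -p embed-certs-709250 ssh -- sudo crictl ps -a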
	
	
	==> coredns [66cad90f41b3ff13cea8abeddf1c30cd0c70afbc78b5bdf097eac4e4a443f478] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a806d8aeab6c357d50125044b802f410bfceea0005ddb47889d8a1faf2d07bef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-709250
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-709250
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=embed-certs-709250
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_20_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:20:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-709250
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:29:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:25:52 +0000   Mon, 28 Oct 2024 12:20:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:25:52 +0000   Mon, 28 Oct 2024 12:20:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:25:52 +0000   Mon, 28 Oct 2024 12:20:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:25:52 +0000   Mon, 28 Oct 2024 12:20:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    embed-certs-709250
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6e4b62e9df843e4bbd9e383d70b7bdb
	  System UUID:                e6e4b62e-9df8-43e4-bbd9-e383d70b7bdb
	  Boot ID:                    33d35854-6802-40c2-bc8d-c766fd7fca9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-p59fl                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-sx86n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-709250                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-embed-certs-709250             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-709250    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-gck6r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-embed-certs-709250             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-wwlqv               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m7s                   kube-proxy       
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node embed-certs-709250 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node embed-certs-709250 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node embed-certs-709250 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s                  kubelet          Node embed-certs-709250 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s                  kubelet          Node embed-certs-709250 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s                  kubelet          Node embed-certs-709250 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s                  node-controller  Node embed-certs-709250 event: Registered Node embed-certs-709250 in Controller
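
The description above is a point-in-time view from the API server: the node is Ready with no taints, and metrics-server-6867b74b74-wwlqv (shown stuck in ImagePullBackOff in the kubelet log below) is still counted among the non-terminated pods. If the profile is still running, the same view can be re-queried; the context name here assumes minikube's usual profile-equals-context naming and is not taken from this report.

    kubectl --context embed-certs-709250 describe node embed-certs-709250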
	
	
	==> dmesg <==
	[  +0.053106] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042646] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.956548] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.943177] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.652934] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.766588] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.064740] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065313] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.207859] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.126120] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.308135] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +4.434809] systemd-fstab-generator[790]: Ignoring "noauto" option for root device
	[  +0.056785] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.122474] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +4.581784] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.850911] kauditd_printk_skb: 85 callbacks suppressed
	[Oct28 12:20] kauditd_printk_skb: 4 callbacks suppressed
	[  +2.184049] systemd-fstab-generator[2552]: Ignoring "noauto" option for root device
	[  +4.494525] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.069154] systemd-fstab-generator[2872]: Ignoring "noauto" option for root device
	[  +5.534431] systemd-fstab-generator[3002]: Ignoring "noauto" option for root device
	[  +0.098216] kauditd_printk_skb: 14 callbacks suppressed
	[Oct28 12:21] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [be038350ba0561c80512791c25946177e679bc87a18b041091ff1fc6105d1539] <==
	{"level":"info","ts":"2024-10-28T12:20:29.931897Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T12:20:29.932201Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d3f1da2044f49cdd","initial-advertise-peer-urls":["https://192.168.39.211:2380"],"listen-peer-urls":["https://192.168.39.211:2380"],"advertise-client-urls":["https://192.168.39.211:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.211:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T12:20:29.932253Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T12:20:29.932320Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-10-28T12:20:29.932343Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-10-28T12:20:30.156150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T12:20:30.156231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T12:20:30.156258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgPreVoteResp from d3f1da2044f49cdd at term 1"}
	{"level":"info","ts":"2024-10-28T12:20:30.156274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T12:20:30.156279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgVoteResp from d3f1da2044f49cdd at term 2"}
	{"level":"info","ts":"2024-10-28T12:20:30.156287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became leader at term 2"}
	{"level":"info","ts":"2024-10-28T12:20:30.156295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3f1da2044f49cdd elected leader d3f1da2044f49cdd at term 2"}
	{"level":"info","ts":"2024-10-28T12:20:30.160321Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:20:30.164017Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d3f1da2044f49cdd","local-member-attributes":"{Name:embed-certs-709250 ClientURLs:[https://192.168.39.211:2379]}","request-path":"/0/members/d3f1da2044f49cdd/attributes","cluster-id":"a3f4522b5c780b58","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:20:30.166283Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:20:30.173966Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:20:30.174013Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:20:30.174113Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:20:30.174177Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:20:30.174227Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:20:30.174258Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:20:30.175162Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:20:30.175922Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.211:2379"}
	{"level":"info","ts":"2024-10-28T12:20:30.196656Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:20:30.206291Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:29:50 up 14 min,  0 users,  load average: 0.16, 0.20, 0.18
	Linux embed-certs-709250 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6c09319a03cd6fc4e7b92df78620192d54885cf982801d6f4ae3638fa0bb0a4d] <==
	W1028 12:25:33.330194       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:25:33.330262       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:25:33.331340       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:25:33.331416       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:26:33.331856       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:26:33.332037       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 12:26:33.332342       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:26:33.332433       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:26:33.333533       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:26:33.333576       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:28:33.334828       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:28:33.335310       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 12:28:33.334828       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:28:33.335515       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:28:33.336711       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:28:33.336788       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
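
The pattern above, repeated OpenAPI aggregation retries and 503s for v1beta1.metrics.k8s.io, indicates the aggregated metrics API has no healthy backend; that is consistent with the kubelet log further down, where metrics-server-6867b74b74-wwlqv is stuck pulling fake.domain/registry.k8s.io/echoserver:1.4. A quick client-side check, assuming the profile's kubeconfig context still exists:

    kubectl --context embed-certs-709250 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-709250 -n kube-system describe pod metrics-server-6867b74b74-wwlqv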
	
	
	==> kube-apiserver [a285c6010e35886eba140354599221c6822f9e1d3c0370a4001b24894ae0defe] <==
	W1028 12:20:22.188562       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.207375       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.213172       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.217804       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.229435       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.259347       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.294387       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.309384       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.386238       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.444648       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.459447       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.467170       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.468598       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.499317       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.526504       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.561710       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.647833       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.698537       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.797604       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.959630       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.979942       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:23.122195       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:23.167447       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:23.168838       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:23.256854       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b6ec6c57ee1ebbb4dd22d98288839f4b5fe3ad235d762c554effa1cbbcbd9047] <==
	E1028 12:24:39.144954       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:24:39.794185       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:25:09.152631       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:25:09.803329       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:25:39.160487       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:25:39.811391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:25:52.847255       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-709250"
	E1028 12:26:09.166639       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:26:09.819805       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:26:35.195213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="329.038µs"
	E1028 12:26:39.178371       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:26:39.827688       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:26:47.195447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="207.13µs"
	E1028 12:27:09.185478       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:27:09.837321       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:27:39.192726       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:27:39.845266       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:28:09.199681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:28:09.853498       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:28:39.207172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:28:39.862649       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:29:09.214364       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:29:09.870443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:29:39.222347       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:29:39.878583       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1a152ca26f66cbcaf82b768858ea162d9ae60de9ee938ee5bb3ee0e3088d9835] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:20:42.649400       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:20:42.664873       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.211"]
	E1028 12:20:42.664968       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:20:42.708116       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:20:42.708167       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:20:42.708200       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:20:42.711039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:20:42.711446       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:20:42.711475       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:20:42.712745       1 config.go:199] "Starting service config controller"
	I1028 12:20:42.712787       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:20:42.712814       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:20:42.712818       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:20:42.713359       1 config.go:328] "Starting node config controller"
	I1028 12:20:42.713391       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:20:42.813584       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:20:42.813672       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:20:42.813697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [30e6fb27555e9f5a2c2f3442702674829f0e267f75fbec5b8bcd434c802d6d82] <==
	W1028 12:20:33.256472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 12:20:33.256521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.258720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 12:20:33.258809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.282338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 12:20:33.282586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.323223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 12:20:33.323341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.359951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 12:20:33.360789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.429978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 12:20:33.430191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.459404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 12:20:33.459646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.465840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 12:20:33.465875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.490925       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:20:33.491056       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 12:20:33.658673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 12:20:33.658813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.677632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 12:20:33.677753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.685149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 12:20:33.685266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1028 12:20:35.450004       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 12:28:45 embed-certs-709250 kubelet[2879]: E1028 12:28:45.177561    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:28:45 embed-certs-709250 kubelet[2879]: E1028 12:28:45.365446    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118525365163961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:28:45 embed-certs-709250 kubelet[2879]: E1028 12:28:45.365495    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118525365163961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:28:55 embed-certs-709250 kubelet[2879]: E1028 12:28:55.367347    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118535366598558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:28:55 embed-certs-709250 kubelet[2879]: E1028 12:28:55.367833    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118535366598558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:28:57 embed-certs-709250 kubelet[2879]: E1028 12:28:57.176389    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:29:05 embed-certs-709250 kubelet[2879]: E1028 12:29:05.369729    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118545369377942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:05 embed-certs-709250 kubelet[2879]: E1028 12:29:05.370255    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118545369377942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:09 embed-certs-709250 kubelet[2879]: E1028 12:29:09.175291    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:29:15 embed-certs-709250 kubelet[2879]: E1028 12:29:15.371666    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118555371227531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:15 embed-certs-709250 kubelet[2879]: E1028 12:29:15.371927    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118555371227531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:24 embed-certs-709250 kubelet[2879]: E1028 12:29:24.175947    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:29:25 embed-certs-709250 kubelet[2879]: E1028 12:29:25.373979    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118565373610356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:25 embed-certs-709250 kubelet[2879]: E1028 12:29:25.374023    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118565373610356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:35 embed-certs-709250 kubelet[2879]: E1028 12:29:35.176465    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:29:35 embed-certs-709250 kubelet[2879]: E1028 12:29:35.196615    2879 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 12:29:35 embed-certs-709250 kubelet[2879]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 12:29:35 embed-certs-709250 kubelet[2879]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 12:29:35 embed-certs-709250 kubelet[2879]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 12:29:35 embed-certs-709250 kubelet[2879]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 12:29:35 embed-certs-709250 kubelet[2879]: E1028 12:29:35.375936    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118575375666955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:35 embed-certs-709250 kubelet[2879]: E1028 12:29:35.375983    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118575375666955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:45 embed-certs-709250 kubelet[2879]: E1028 12:29:45.378545    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118585377750823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:45 embed-certs-709250 kubelet[2879]: E1028 12:29:45.378975    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118585377750823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:50 embed-certs-709250 kubelet[2879]: E1028 12:29:50.175632    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	
	
	==> storage-provisioner [14eb80a56c8ed084ecacb6b9c43e29e8b07d7ba5ab87e109ff549fb54b3785f4] <==
	I1028 12:20:42.407161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:20:42.537353       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:20:42.537471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:20:42.557894       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:20:42.558671       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb2d0ff7-983a-459a-a2dd-54680a334af3", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-709250_f4ce2431-4ded-4f52-8ad7-e27599efb83d became leader
	I1028 12:20:42.561426       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-709250_f4ce2431-4ded-4f52-8ad7-e27599efb83d!
	I1028 12:20:42.664512       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-709250_f4ce2431-4ded-4f52-8ad7-e27599efb83d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709250 -n embed-certs-709250
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-709250 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-wwlqv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-709250 describe pod metrics-server-6867b74b74-wwlqv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-709250 describe pod metrics-server-6867b74b74-wwlqv: exit status 1 (73.005234ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-wwlqv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-709250 describe pod metrics-server-6867b74b74-wwlqv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1028 12:21:32.958594  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-871884 -n no-preload-871884
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-28 12:30:26.9932711 +0000 UTC m=+5749.661495486
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-871884 -n no-preload-871884
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-871884 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-871884 logs -n 25: (2.054681085s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-601400                              | cert-expiration-601400       | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-871884             | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-219559 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | disable-driver-mounts-219559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:10 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709250            | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC | 28 Oct 24 12:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089993        | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-871884                  | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-349222  | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709250                 | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089993             | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-349222       | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:13 UTC | 28 Oct 24 12:21 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:13:02
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:13:02.452508  186547 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:13:02.452621  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452630  186547 out.go:358] Setting ErrFile to fd 2...
	I1028 12:13:02.452635  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452828  186547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:13:02.453378  186547 out.go:352] Setting JSON to false
	I1028 12:13:02.454320  186547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6925,"bootTime":1730110657,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:13:02.454420  186547 start.go:139] virtualization: kvm guest
	I1028 12:13:02.456754  186547 out.go:177] * [default-k8s-diff-port-349222] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:13:02.458343  186547 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:13:02.458413  186547 notify.go:220] Checking for updates...
	I1028 12:13:02.460946  186547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:13:02.462089  186547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:13:02.463460  186547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:13:02.464649  186547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:13:02.466107  186547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:13:02.468142  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:13:02.468518  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.468587  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.483793  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1028 12:13:02.484273  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.484861  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.484884  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.485260  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.485471  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.485712  186547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:13:02.485997  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.486030  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.501110  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I1028 12:13:02.501722  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.502335  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.502362  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.502682  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.502878  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.539766  186547 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:13:02.541024  186547 start.go:297] selected driver: kvm2
	I1028 12:13:02.541038  186547 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.541168  186547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:13:02.541929  186547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.542014  186547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:13:02.557443  186547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:13:02.557868  186547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:13:02.557902  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:13:02.557947  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:13:02.557987  186547 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.558098  186547 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.560651  186547 out.go:177] * Starting "default-k8s-diff-port-349222" primary control-plane node in "default-k8s-diff-port-349222" cluster
	I1028 12:13:02.693744  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:02.561767  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:13:02.561800  186547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:13:02.561810  186547 cache.go:56] Caching tarball of preloaded images
	I1028 12:13:02.561877  186547 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:13:02.561887  186547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:13:02.561973  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:13:02.562165  186547 start.go:360] acquireMachinesLock for default-k8s-diff-port-349222: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:13:08.773770  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:11.845825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:17.925957  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:20.997733  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:27.077858  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:30.149737  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:36.229851  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:39.301764  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:45.381781  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:48.453767  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:54.533793  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:57.605754  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:03.685848  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:06.757874  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:12.837829  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:15.909778  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:21.989850  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:25.061812  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:31.141825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:34.213757  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:40.293844  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:43.365865  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:49.445872  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:52.517750  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:58.597834  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:01.669837  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:07.749853  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:10.821838  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:13.826298  185942 start.go:364] duration metric: took 3m37.788021766s to acquireMachinesLock for "embed-certs-709250"
	I1028 12:15:13.826369  185942 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:13.826382  185942 fix.go:54] fixHost starting: 
	I1028 12:15:13.827047  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:13.827113  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:13.842889  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I1028 12:15:13.843403  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:13.843915  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:15:13.843938  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:13.844374  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:13.844568  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:13.844733  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:15:13.846440  185942 fix.go:112] recreateIfNeeded on embed-certs-709250: state=Stopped err=<nil>
	I1028 12:15:13.846464  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	W1028 12:15:13.846629  185942 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:13.848878  185942 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709250" ...
	I1028 12:15:13.850607  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Start
	I1028 12:15:13.850800  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring networks are active...
	I1028 12:15:13.851930  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network default is active
	I1028 12:15:13.852331  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network mk-embed-certs-709250 is active
	I1028 12:15:13.852652  185942 main.go:141] libmachine: (embed-certs-709250) Getting domain xml...
	I1028 12:15:13.853394  185942 main.go:141] libmachine: (embed-certs-709250) Creating domain...
	I1028 12:15:15.098667  185942 main.go:141] libmachine: (embed-certs-709250) Waiting to get IP...
	I1028 12:15:15.099525  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.099919  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.099951  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.099877  187018 retry.go:31] will retry after 285.25732ms: waiting for machine to come up
	I1028 12:15:15.386531  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.386992  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.387023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.386921  187018 retry.go:31] will retry after 327.08041ms: waiting for machine to come up
	I1028 12:15:15.715435  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.715900  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.715928  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.715846  187018 retry.go:31] will retry after 443.083162ms: waiting for machine to come up
	I1028 12:15:13.823652  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:13.823724  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824056  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:15:13.824085  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824284  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:15:13.826158  185546 machine.go:96] duration metric: took 4m37.413470632s to provisionDockerMachine
	I1028 12:15:13.826202  185546 fix.go:56] duration metric: took 4m37.436313043s for fixHost
	I1028 12:15:13.826208  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 4m37.436350273s
	W1028 12:15:13.826226  185546 start.go:714] error starting host: provision: host is not running
	W1028 12:15:13.826336  185546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 12:15:13.826346  185546 start.go:729] Will try again in 5 seconds ...
	I1028 12:15:16.160595  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.161024  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.161045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.161003  187018 retry.go:31] will retry after 599.535995ms: waiting for machine to come up
	I1028 12:15:16.761771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.762167  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.762213  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.762114  187018 retry.go:31] will retry after 527.275961ms: waiting for machine to come up
	I1028 12:15:17.290788  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:17.291124  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:17.291145  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:17.291098  187018 retry.go:31] will retry after 858.175967ms: waiting for machine to come up
	I1028 12:15:18.150644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.151045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.151071  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.150996  187018 retry.go:31] will retry after 727.962346ms: waiting for machine to come up
	I1028 12:15:18.880545  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.880990  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.881020  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.880942  187018 retry.go:31] will retry after 1.184956373s: waiting for machine to come up
	I1028 12:15:20.067178  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:20.067603  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:20.067635  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:20.067553  187018 retry.go:31] will retry after 1.635056202s: waiting for machine to come up
	I1028 12:15:18.827987  185546 start.go:360] acquireMachinesLock for no-preload-871884: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:15:21.703941  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:21.704338  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:21.704365  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:21.704302  187018 retry.go:31] will retry after 1.865473383s: waiting for machine to come up
	I1028 12:15:23.572362  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:23.572816  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:23.572843  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:23.572773  187018 retry.go:31] will retry after 2.604970031s: waiting for machine to come up
	I1028 12:15:26.181289  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:26.181849  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:26.181880  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:26.181788  187018 retry.go:31] will retry after 2.866004055s: waiting for machine to come up
	I1028 12:15:29.049604  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:29.050029  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:29.050068  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:29.049970  187018 retry.go:31] will retry after 3.046879869s: waiting for machine to come up
	I1028 12:15:33.350844  186170 start.go:364] duration metric: took 3m34.924904114s to acquireMachinesLock for "old-k8s-version-089993"
	I1028 12:15:33.350912  186170 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:33.350923  186170 fix.go:54] fixHost starting: 
	I1028 12:15:33.351392  186170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:33.351440  186170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:33.368339  186170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1028 12:15:33.368805  186170 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:33.369418  186170 main.go:141] libmachine: Using API Version  1
	I1028 12:15:33.369439  186170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:33.369784  186170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:33.369969  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:33.370125  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetState
	I1028 12:15:33.371873  186170 fix.go:112] recreateIfNeeded on old-k8s-version-089993: state=Stopped err=<nil>
	I1028 12:15:33.371908  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	W1028 12:15:33.372086  186170 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:33.374289  186170 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-089993" ...
	I1028 12:15:32.100252  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.100812  185942 main.go:141] libmachine: (embed-certs-709250) Found IP for machine: 192.168.39.211
	I1028 12:15:32.100831  185942 main.go:141] libmachine: (embed-certs-709250) Reserving static IP address...
	I1028 12:15:32.100842  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has current primary IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.101552  185942 main.go:141] libmachine: (embed-certs-709250) Reserved static IP address: 192.168.39.211
	I1028 12:15:32.101568  185942 main.go:141] libmachine: (embed-certs-709250) Waiting for SSH to be available...
	I1028 12:15:32.101602  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.101629  185942 main.go:141] libmachine: (embed-certs-709250) DBG | skip adding static IP to network mk-embed-certs-709250 - found existing host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"}
	I1028 12:15:32.101644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Getting to WaitForSSH function...
	I1028 12:15:32.104041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.104356  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104459  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH client type: external
	I1028 12:15:32.104488  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa (-rw-------)
	I1028 12:15:32.104519  185942 main.go:141] libmachine: (embed-certs-709250) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:32.104530  185942 main.go:141] libmachine: (embed-certs-709250) DBG | About to run SSH command:
	I1028 12:15:32.104538  185942 main.go:141] libmachine: (embed-certs-709250) DBG | exit 0
	I1028 12:15:32.233966  185942 main.go:141] libmachine: (embed-certs-709250) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:32.234363  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetConfigRaw
	I1028 12:15:32.235010  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.237443  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.237755  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.237783  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.238040  185942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/config.json ...
	I1028 12:15:32.238272  185942 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:32.238291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:32.238541  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.240765  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241165  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.241212  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241333  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.241520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241704  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241836  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.241989  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.242234  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.242247  185942 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:32.358412  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:32.358443  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.358773  185942 buildroot.go:166] provisioning hostname "embed-certs-709250"
	I1028 12:15:32.358810  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.359027  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.361776  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362122  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.362161  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362262  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.362429  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362579  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362709  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.362867  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.363084  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.363098  185942 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709250 && echo "embed-certs-709250" | sudo tee /etc/hostname
	I1028 12:15:32.492437  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709250
	
	I1028 12:15:32.492466  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.495108  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495394  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.495438  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495586  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.495771  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.495927  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.496054  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.496215  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.496399  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.496416  185942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709250/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:32.619038  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:32.619074  185942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:32.619113  185942 buildroot.go:174] setting up certificates
	I1028 12:15:32.619125  185942 provision.go:84] configureAuth start
	I1028 12:15:32.619137  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.619451  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.622055  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622448  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.622479  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622593  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.624610  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625037  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.625066  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625086  185942 provision.go:143] copyHostCerts
	I1028 12:15:32.625174  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:32.625190  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:32.625259  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:32.625396  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:32.625407  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:32.625444  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:32.625519  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:32.625541  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:32.625575  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:32.625645  185942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709250 san=[127.0.0.1 192.168.39.211 embed-certs-709250 localhost minikube]
	I1028 12:15:32.684483  185942 provision.go:177] copyRemoteCerts
	I1028 12:15:32.684547  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:32.684576  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.686926  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687244  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.687284  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687427  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.687617  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.687744  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.687890  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:32.776282  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:32.802180  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 12:15:32.829609  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:32.854274  185942 provision.go:87] duration metric: took 235.133526ms to configureAuth
	I1028 12:15:32.854305  185942 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:32.854474  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:15:32.854547  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.857363  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.857736  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.857771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.858038  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.858251  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858442  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858652  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.858809  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.858979  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.858996  185942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:33.101841  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:33.101870  185942 machine.go:96] duration metric: took 863.584969ms to provisionDockerMachine
	I1028 12:15:33.101883  185942 start.go:293] postStartSetup for "embed-certs-709250" (driver="kvm2")
	I1028 12:15:33.101896  185942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:33.101911  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.102249  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:33.102285  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.105023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.105357  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105493  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.105710  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.105881  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.106032  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.193225  185942 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:33.197548  185942 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:33.197570  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:33.197637  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:33.197739  185942 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:33.197861  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:33.207962  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:33.231808  185942 start.go:296] duration metric: took 129.908529ms for postStartSetup
	I1028 12:15:33.231853  185942 fix.go:56] duration metric: took 19.405472942s for fixHost
	I1028 12:15:33.231875  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.234609  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.234943  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.234979  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.235167  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.235370  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235642  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.235806  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:33.236026  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:33.236041  185942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:33.350639  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117733.322211717
	
	I1028 12:15:33.350663  185942 fix.go:216] guest clock: 1730117733.322211717
	I1028 12:15:33.350673  185942 fix.go:229] Guest: 2024-10-28 12:15:33.322211717 +0000 UTC Remote: 2024-10-28 12:15:33.231858201 +0000 UTC m=+237.345598419 (delta=90.353516ms)
	I1028 12:15:33.350707  185942 fix.go:200] guest clock delta is within tolerance: 90.353516ms
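For context on the fix.go lines just above: the guest clock is sampled with `date +%s.%N` over SSH and compared against the host clock, and provisioning only continues if the delta is inside a tolerance. A minimal, self-contained Go sketch of that comparison follows; the parsing helper and the 2s tolerance are illustrative assumptions, not minikube's exact code.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the output of `date +%s.%N` (e.g. "1730117733.322211717")
	// into a time.Time: seconds and nanoseconds since the Unix epoch.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Sample value taken from the log line above.
		guest, err := parseGuestClock("1730117733.322211717")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// Illustrative tolerance; the real threshold lives in minikube's fix.go.
		const tolerance = 2 * time.Second
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}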
	I1028 12:15:33.350714  185942 start.go:83] releasing machines lock for "embed-certs-709250", held for 19.524379046s
	I1028 12:15:33.350737  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.350974  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:33.353647  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354012  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.354041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354244  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354690  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354873  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354973  185942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:33.355017  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.355090  185942 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:33.355116  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.357679  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358050  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358074  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358242  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358389  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.358542  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.358584  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358616  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358681  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.358721  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358892  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.359048  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.359197  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.443468  185942 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:33.498501  185942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:33.642221  185942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:33.649269  185942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:33.649336  185942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:33.665990  185942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:33.666023  185942 start.go:495] detecting cgroup driver to use...
	I1028 12:15:33.666103  185942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:33.683188  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:33.699441  185942 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:33.699506  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:33.714192  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:33.728325  185942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:33.850801  185942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:34.028929  185942 docker.go:233] disabling docker service ...
	I1028 12:15:34.028991  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:34.045600  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:34.059450  185942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:34.182310  185942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:34.305346  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:34.322354  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:34.342738  185942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:15:34.342804  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.354622  185942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:34.354687  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.365663  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.376503  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.388360  185942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:34.399960  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.419087  185942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.439700  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
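The pause-image, cgroup-manager and sysctl settings above are applied as plain line rewrites of /etc/crio/crio.conf.d/02-crio.conf via sed. As a rough sketch of the same kind of edit done from Go (the file path and replacement strings are taken from the commands above; this is not the code minikube actually runs):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfigLine replaces every line matching pattern with repl, mimicking the
	// `sed -i 's|^.*pause_image = .*$|...|'` style edits shown in the log above.
	func setConfigLine(path, pattern, repl string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile("(?m)" + pattern)
		return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
		if err := setConfigLine(conf, `^.*pause_image = .*$`,
			`pause_image = "registry.k8s.io/pause:3.10"`); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		if err := setConfigLine(conf, `^.*cgroup_manager = .*$`,
			`cgroup_manager = "cgroupfs"`); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}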
	I1028 12:15:34.451425  185942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:34.461657  185942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:34.461710  185942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:34.476292  185942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:34.487186  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:34.614984  185942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:15:34.709983  185942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:34.710061  185942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:34.715204  185942 start.go:563] Will wait 60s for crictl version
	I1028 12:15:34.715268  185942 ssh_runner.go:195] Run: which crictl
	I1028 12:15:34.719459  185942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:34.760330  185942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:34.760407  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.788635  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.820113  185942 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:15:34.821282  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:34.824384  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.824719  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:34.824746  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.825032  185942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:34.829502  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
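The bash one-liner above rewrites /etc/hosts so that exactly one `host.minikube.internal` entry points at the gateway IP (filter out the old line, append a fresh one, copy the result back). A small Go sketch of that filter-and-append logic, writing to a scratch file rather than /etc/hosts since this is only an illustration:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for the given hostname (and stray
	// blank lines) and appends a fresh "ip<TAB>hostname" entry, mirroring the
	// grep -v / echo pipeline in the log above.
	func ensureHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+hostname) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// Illustrative scratch file; the real target is /etc/hosts on the guest.
		if err := ensureHostsEntry("hosts.sample", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}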
	I1028 12:15:34.842695  185942 kubeadm.go:883] updating cluster {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:34.842845  185942 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:15:34.842897  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:34.881154  185942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:15:34.881218  185942 ssh_runner.go:195] Run: which lz4
	I1028 12:15:34.885630  185942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:34.890045  185942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:34.890075  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:15:33.375597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .Start
	I1028 12:15:33.375787  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring networks are active...
	I1028 12:15:33.376736  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network default is active
	I1028 12:15:33.377208  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network mk-old-k8s-version-089993 is active
	I1028 12:15:33.377706  186170 main.go:141] libmachine: (old-k8s-version-089993) Getting domain xml...
	I1028 12:15:33.378449  186170 main.go:141] libmachine: (old-k8s-version-089993) Creating domain...
	I1028 12:15:34.645925  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting to get IP...
	I1028 12:15:34.646739  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.647234  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.647347  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.647218  187153 retry.go:31] will retry after 292.558863ms: waiting for machine to come up
	I1028 12:15:34.941609  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.942074  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.942102  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.942024  187153 retry.go:31] will retry after 331.872118ms: waiting for machine to come up
	I1028 12:15:35.275748  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.276283  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.276318  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.276244  187153 retry.go:31] will retry after 427.829102ms: waiting for machine to come up
	I1028 12:15:35.705935  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.706409  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.706438  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.706367  187153 retry.go:31] will retry after 371.58196ms: waiting for machine to come up
	I1028 12:15:36.079879  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.080445  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.080469  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.080392  187153 retry.go:31] will retry after 504.323728ms: waiting for machine to come up
	I1028 12:15:36.585967  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.586405  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.586436  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.586346  187153 retry.go:31] will retry after 676.776678ms: waiting for machine to come up
	I1028 12:15:37.265499  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:37.266087  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:37.266114  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:37.266037  187153 retry.go:31] will retry after 1.178891662s: waiting for machine to come up
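The `retry.go:31` lines above show the machine-IP wait loop retrying with a growing, jittered delay ("will retry after ..."). A stripped-down sketch of that pattern; the delays, jitter and attempt cap are illustrative and not the values minikube uses:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
	// sleeping a little longer (with jitter) after every failure, similar to the
	// "will retry after ..." messages in the log above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(5, 300*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("result:", err)
	}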
	I1028 12:15:36.448704  185942 crio.go:462] duration metric: took 1.563096609s to copy over tarball
	I1028 12:15:36.448792  185942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:38.703177  185942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25435315s)
	I1028 12:15:38.703207  185942 crio.go:469] duration metric: took 2.254465841s to extract the tarball
	I1028 12:15:38.703217  185942 ssh_runner.go:146] rm: /preloaded.tar.lz4
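The preload image cache is pushed to the guest and unpacked with `sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`, the exact command timed above. A sketch of issuing that same command from Go with os/exec; it runs locally here for illustration, whereas minikube executes it on the guest through ssh_runner:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Arguments copied from the extraction command in the log above.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4",
			"-C", "/var",
			"-xf", "/preloaded.tar.lz4")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "extract failed:", err)
			os.Exit(1)
		}
		fmt.Println("preload extracted into /var")
	}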
	I1028 12:15:38.741005  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:38.788350  185942 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:15:38.788376  185942 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:15:38.788383  185942 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1028 12:15:38.788491  185942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:15:38.788558  185942 ssh_runner.go:195] Run: crio config
	I1028 12:15:38.835642  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:38.835667  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:38.835678  185942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:15:38.835701  185942 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709250 NodeName:embed-certs-709250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

	I1028 12:15:38.835822  185942 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709250"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:15:38.835879  185942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:15:38.846832  185942 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:15:38.846925  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:15:38.857103  185942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1028 12:15:38.874531  185942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:15:38.892213  185942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1028 12:15:38.910949  185942 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1028 12:15:38.915391  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:38.928874  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:39.045969  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:15:39.063425  185942 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250 for IP: 192.168.39.211
	I1028 12:15:39.063449  185942 certs.go:194] generating shared ca certs ...
	I1028 12:15:39.063465  185942 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:15:39.063638  185942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:15:39.063693  185942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:15:39.063709  185942 certs.go:256] generating profile certs ...
	I1028 12:15:39.063810  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.key
	I1028 12:15:39.063893  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce
	I1028 12:15:39.063951  185942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key
	I1028 12:15:39.064107  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:15:39.064153  185942 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:15:39.064167  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:15:39.064202  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:15:39.064239  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:15:39.064272  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:15:39.064335  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:39.064972  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:15:39.103261  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:15:39.145102  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:15:39.175151  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:15:39.205220  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 12:15:39.236045  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:15:39.273622  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:15:39.299336  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:15:39.325277  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:15:39.349878  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:15:39.374466  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:15:39.398920  185942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:15:39.416280  185942 ssh_runner.go:195] Run: openssl version
	I1028 12:15:39.422478  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:15:39.434671  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439581  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439635  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.445736  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:15:39.457128  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:15:39.468602  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473229  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473306  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.479063  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:15:39.490370  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:15:39.501843  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506514  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506579  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.512633  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
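Each CA file copied above also gets a companion symlink named after its OpenSSL subject hash (`openssl x509 -hash -noout -in <cert>`), which is where the `3ec20f2e.0`, `b5213941.0` and `51391683.0` names come from. A sketch of computing that hash and creating the link from Go; the paths in main are illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// subjectHashLink runs `openssl x509 -hash -noout -in certPath` and creates a
	// "<hash>.0" symlink in certsDir pointing at certPath, like the log above.
	func subjectHashLink(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		// Replace any stale link, mirroring `ln -fs`.
		_ = os.Remove(link)
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("created", link)
	}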
	I1028 12:15:39.524115  185942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:15:39.528804  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:15:39.534982  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:15:39.541214  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:15:39.547734  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:15:39.554143  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:15:39.560719  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
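The `-checkend 86400` calls above ask openssl whether each control-plane certificate will still be valid 24 hours from now: exit status 0 means it will not expire within that window, a non-zero status means it will expire (or could not be checked). A small sketch of reading that exit status from Go; the certificate path in main is just an example taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// expiresWithin reports whether the certificate at path expires within the
	// given number of seconds, using the same openssl invocation as the log above.
	func expiresWithin(path string, seconds int) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", fmt.Sprint(seconds))
		err := cmd.Run()
		if err == nil {
			return false, nil // exit 0: still valid past the window
		}
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: openssl reports expiry (or an unreadable cert)
		}
		return false, err // openssl itself could not be run
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
		fmt.Println(expiring, err)
	}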
	I1028 12:15:39.567076  185942 kubeadm.go:392] StartCluster: {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:15:39.567173  185942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:15:39.567226  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.611567  185942 cri.go:89] found id: ""
	I1028 12:15:39.611644  185942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:15:39.622561  185942 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:15:39.622583  185942 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:15:39.622637  185942 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:15:39.632757  185942 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:15:39.633873  185942 kubeconfig.go:125] found "embed-certs-709250" server: "https://192.168.39.211:8443"
	I1028 12:15:39.635943  185942 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:15:39.646060  185942 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I1028 12:15:39.646104  185942 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:15:39.646119  185942 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:15:39.646177  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.686806  185942 cri.go:89] found id: ""
	I1028 12:15:39.686891  185942 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:15:39.703935  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:15:39.714319  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:15:39.714341  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:15:39.714389  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:15:39.725383  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:15:39.725452  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:15:39.737075  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:15:39.748226  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:15:39.748311  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:15:39.760111  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.770287  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:15:39.770365  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.780776  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:15:39.790412  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:15:39.790475  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
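The grep-and-rm sequence above drops every kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint, so the following `kubeadm init phase kubeconfig` run can regenerate them. A rough sketch of that logic, with the endpoint and file list taken from the log (an illustration, not minikube's actual code):

// staleconf.go - remove kubeconfigs that are missing or point at the wrong
// control-plane endpoint, so kubeadm can recreate them.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it and let kubeadm recreate it.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("keeping:", f)
	}
}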
	I1028 12:15:39.800727  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:15:39.811331  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:39.926791  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:38.446927  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:38.447488  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:38.447518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:38.447431  187153 retry.go:31] will retry after 1.170920623s: waiting for machine to come up
	I1028 12:15:39.619731  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:39.620169  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:39.620198  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:39.620119  187153 retry.go:31] will retry after 1.49376255s: waiting for machine to come up
	I1028 12:15:41.115247  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:41.115785  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:41.115815  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:41.115737  187153 retry.go:31] will retry after 2.161966931s: waiting for machine to come up
	I1028 12:15:43.280454  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:43.280989  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:43.281026  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:43.280932  187153 retry.go:31] will retry after 2.179284899s: waiting for machine to come up
	I1028 12:15:41.043020  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.11617977s)
	I1028 12:15:41.043082  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.246311  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.309073  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.392313  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:15:41.392425  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:41.893601  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.393518  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.444753  185942 api_server.go:72] duration metric: took 1.052438751s to wait for apiserver process to appear ...
	I1028 12:15:42.444794  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:15:42.444821  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.214786  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.214821  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.214837  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.252422  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.252458  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.445825  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.451454  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.451549  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:45.945668  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.956623  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.956667  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.445240  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.450197  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:46.450223  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.945901  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.950302  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:15:46.956218  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:15:46.956245  185942 api_server.go:131] duration metric: took 4.511443878s to wait for apiserver health ...
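The wait above repeatedly polls /healthz, tolerating 403 (anonymous access is refused, presumably until the RBAC bootstrap roles that permit unauthenticated health checks exist) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) until a 200 arrives. A self-contained sketch of such a poll, using the endpoint from the log and skipping TLS verification only because this sketch has no cluster CA at hand (an assumption, not how minikube authenticates):

// healthzpoll.go - poll the apiserver /healthz endpoint until it returns 200
// or a deadline elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.211:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz -> %d: %s\n", resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // back off and retry
	}
	fmt.Println("timed out waiting for apiserver health")
}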
	I1028 12:15:46.956254  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:46.956260  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:46.958294  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:15:45.462983  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:45.463534  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:45.463560  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:45.463491  187153 retry.go:31] will retry after 2.2623086s: waiting for machine to come up
	I1028 12:15:47.728769  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:47.729277  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:47.729332  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:47.729241  187153 retry.go:31] will retry after 4.393695309s: waiting for machine to come up
	I1028 12:15:46.959738  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:15:46.970473  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:15:46.994129  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:15:47.003807  185942 system_pods.go:59] 8 kube-system pods found
	I1028 12:15:47.003843  185942 system_pods.go:61] "coredns-7c65d6cfc9-j66cd" [d53b2839-00f6-4ccc-833d-76424b3efdba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:15:47.003851  185942 system_pods.go:61] "etcd-embed-certs-709250" [24761127-dde4-4f5d-b7cf-a13e37366e0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:15:47.003858  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [17996153-32c3-41e0-be90-fc9e058e0080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:15:47.003864  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [4ce37c00-1015-45f8-b847-1ca92cdf3a31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:15:47.003871  185942 system_pods.go:61] "kube-proxy-dl7xq" [a06ed5ff-b1e9-42c7-ba26-f120bb03ccb6] Running
	I1028 12:15:47.003877  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [c76e654e-a7fc-4891-8e73-bd74f9178c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:15:47.003883  185942 system_pods.go:61] "metrics-server-6867b74b74-k69kz" [568d5308-3f66-459b-b5c8-594d9400b6c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:15:47.003886  185942 system_pods.go:61] "storage-provisioner" [6552cef1-21b6-4306-a3e2-ff16793257dc] Running
	I1028 12:15:47.003893  185942 system_pods.go:74] duration metric: took 9.734271ms to wait for pod list to return data ...
	I1028 12:15:47.003900  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:15:47.008428  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:15:47.008465  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:15:47.008479  185942 node_conditions.go:105] duration metric: took 4.573275ms to run NodePressure ...
	I1028 12:15:47.008504  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:47.285509  185942 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291045  185942 kubeadm.go:739] kubelet initialised
	I1028 12:15:47.291069  185942 kubeadm.go:740] duration metric: took 5.521713ms waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291078  185942 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:15:47.299072  185942 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:49.312365  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:50.804953  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:50.804976  185942 pod_ready.go:82] duration metric: took 3.505873868s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:50.804986  185942 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
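pod_ready.go above waits up to 4m0s for each system-critical pod to report the Ready condition. One rough way to reproduce that wait by hand is to shell out to kubectl with the context and pod name from the log; the jsonpath approach below is an assumption for illustration, not minikube's own implementation:

// podready.go - poll a pod's Ready condition via kubectl until it is True
// or a deadline elapses.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "embed-certs-709250",
			"-n", "kube-system", "get", "pod", "etcd-embed-certs-709250",
			"-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}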
	I1028 12:15:52.126559  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126960  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has current primary IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126988  186170 main.go:141] libmachine: (old-k8s-version-089993) Found IP for machine: 192.168.61.119
	I1028 12:15:52.127021  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserving static IP address...
	I1028 12:15:52.127441  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.127474  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | skip adding static IP to network mk-old-k8s-version-089993 - found existing host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"}
	I1028 12:15:52.127486  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserved static IP address: 192.168.61.119
	I1028 12:15:52.127498  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting for SSH to be available...
	I1028 12:15:52.127551  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Getting to WaitForSSH function...
	I1028 12:15:52.129970  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130313  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.130349  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH client type: external
	I1028 12:15:52.130540  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa (-rw-------)
	I1028 12:15:52.130565  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:52.130578  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | About to run SSH command:
	I1028 12:15:52.130593  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | exit 0
	I1028 12:15:52.253686  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:52.254051  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:15:52.254719  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.257217  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257692  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.257719  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257996  186170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:15:52.258203  186170 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:52.258222  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:52.258456  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.260665  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.260972  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.261012  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.261139  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.261295  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261451  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261632  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.261786  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.261968  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.261979  186170 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:52.362092  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:52.362129  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362362  186170 buildroot.go:166] provisioning hostname "old-k8s-version-089993"
	I1028 12:15:52.362386  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362588  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.365124  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.365489  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365598  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.365768  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.365924  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.366060  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.366238  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.366424  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.366441  186170 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089993 && echo "old-k8s-version-089993" | sudo tee /etc/hostname
	I1028 12:15:52.485032  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089993
	
	I1028 12:15:52.485069  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.487733  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488095  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.488129  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488270  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.488458  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488724  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.488872  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.489063  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.489079  186170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089993/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:52.599940  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:52.599975  186170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:52.600009  186170 buildroot.go:174] setting up certificates
	I1028 12:15:52.600019  186170 provision.go:84] configureAuth start
	I1028 12:15:52.600028  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.600319  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.603047  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603357  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.603385  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603536  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.605827  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606164  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.606190  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606334  186170 provision.go:143] copyHostCerts
	I1028 12:15:52.606414  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:52.606429  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:52.606500  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:52.606650  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:52.606661  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:52.606693  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:52.606795  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:52.606805  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:52.606834  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:52.606904  186170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089993 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-089993]
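The provisioner generates a server certificate whose SANs cover 127.0.0.1, the machine IP, and the host names listed above, signed by the minikube CA. A compact sketch of building a certificate with those SANs using crypto/x509; self-signing here is a simplification to keep the example self-contained, whereas the real flow signs with the CA key:

// servercert.go - emit a PEM certificate carrying the SANs from the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-089993"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-089993"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.119")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}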
	I1028 12:15:52.715475  186170 provision.go:177] copyRemoteCerts
	I1028 12:15:52.715531  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:52.715556  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.718456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718758  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.718801  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718993  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.719189  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.719339  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.719461  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:52.802994  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:52.832311  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:15:52.864304  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:52.892143  186170 provision.go:87] duration metric: took 292.108499ms to configureAuth
	I1028 12:15:52.892178  186170 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:52.892401  186170 config.go:182] Loaded profile config "old-k8s-version-089993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:15:52.892499  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.895607  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.895996  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.896031  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.896198  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.896442  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896615  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896796  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.897005  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.897225  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.897249  186170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:53.144636  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:53.144668  186170 machine.go:96] duration metric: took 886.451205ms to provisionDockerMachine
	I1028 12:15:53.144683  186170 start.go:293] postStartSetup for "old-k8s-version-089993" (driver="kvm2")
	I1028 12:15:53.144701  186170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:53.144739  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.145102  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:53.145135  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.147486  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147776  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.147805  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147926  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.148136  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.148297  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.148438  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.228968  186170 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:53.233756  186170 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:53.233783  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:53.233862  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:53.233981  186170 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:53.234114  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:53.244314  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:53.273027  186170 start.go:296] duration metric: took 128.321696ms for postStartSetup
	I1028 12:15:53.273067  186170 fix.go:56] duration metric: took 19.922145767s for fixHost
	I1028 12:15:53.273087  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.275762  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276036  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.276069  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276227  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.276431  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276610  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276759  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.276948  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:53.277130  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:53.277140  186170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:53.378422  186547 start.go:364] duration metric: took 2m50.816229865s to acquireMachinesLock for "default-k8s-diff-port-349222"
	I1028 12:15:53.378482  186547 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:53.378491  186547 fix.go:54] fixHost starting: 
	I1028 12:15:53.378917  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:53.378971  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:53.395967  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I1028 12:15:53.396434  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:53.396923  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:15:53.396950  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:53.397332  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:53.397552  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:15:53.397726  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:15:53.399287  186547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349222: state=Stopped err=<nil>
	I1028 12:15:53.399337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	W1028 12:15:53.399468  186547 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:53.401446  186547 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-349222" ...
	I1028 12:15:53.378277  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117753.349360033
	
	I1028 12:15:53.378307  186170 fix.go:216] guest clock: 1730117753.349360033
	I1028 12:15:53.378327  186170 fix.go:229] Guest: 2024-10-28 12:15:53.349360033 +0000 UTC Remote: 2024-10-28 12:15:53.273071551 +0000 UTC m=+234.997009775 (delta=76.288482ms)
	I1028 12:15:53.378346  186170 fix.go:200] guest clock delta is within tolerance: 76.288482ms
	I1028 12:15:53.378351  186170 start.go:83] releasing machines lock for "old-k8s-version-089993", held for 20.027466326s
	I1028 12:15:53.378379  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.378640  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:53.381602  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.381951  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.381980  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.382165  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382654  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382864  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382949  186170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:53.382997  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.383090  186170 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:53.383109  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.385829  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.385926  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386223  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386272  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386303  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386343  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386522  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386692  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.386704  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386849  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387012  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.387009  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.387217  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387355  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.462736  186170 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:53.490076  186170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:53.637493  186170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:53.643609  186170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:53.643668  186170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:53.660695  186170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:53.660725  186170 start.go:495] detecting cgroup driver to use...
	I1028 12:15:53.660797  186170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:53.677283  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:53.691838  186170 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:53.691914  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:53.706354  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:53.721257  186170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:53.843177  186170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:54.012260  186170 docker.go:233] disabling docker service ...
	I1028 12:15:54.012369  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:54.028355  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:54.042371  186170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:54.175559  186170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:54.308690  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:54.323918  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:54.343000  186170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:15:54.343072  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.354540  186170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:54.354620  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.366058  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.377720  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.388649  186170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:54.401499  186170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:54.414177  186170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:54.414250  186170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:54.429049  186170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:54.441955  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:54.588173  186170 ssh_runner.go:195] Run: sudo systemctl restart crio
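Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the registry.k8s.io/pause:3.2 pause image, the cgroupfs cgroup manager and a pod-scoped conmon cgroup before crio is restarted. A hedged reconstruction of the affected keys (the log only rewrites the individual lines; the section headers shown here are assumed):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"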
	I1028 12:15:54.686671  186170 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:54.686732  186170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:54.692246  186170 start.go:563] Will wait 60s for crictl version
	I1028 12:15:54.692303  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:15:54.697056  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:54.749343  186170 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:54.749410  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.783554  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.817295  186170 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:15:52.838774  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.811974  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:53.811997  185942 pod_ready.go:82] duration metric: took 3.00700476s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:53.812008  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:55.824400  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.402920  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Start
	I1028 12:15:53.403172  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring networks are active...
	I1028 12:15:53.403912  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network default is active
	I1028 12:15:53.404195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network mk-default-k8s-diff-port-349222 is active
	I1028 12:15:53.404615  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Getting domain xml...
	I1028 12:15:53.405554  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Creating domain...
	I1028 12:15:54.734540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting to get IP...
	I1028 12:15:54.735417  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735784  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735880  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:54.735759  187305 retry.go:31] will retry after 268.036011ms: waiting for machine to come up
	I1028 12:15:55.005376  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.005999  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.006032  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.005930  187305 retry.go:31] will retry after 255.477665ms: waiting for machine to come up
	I1028 12:15:55.263500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264118  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264153  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.264087  187305 retry.go:31] will retry after 354.942061ms: waiting for machine to come up
	I1028 12:15:55.620877  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621664  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621698  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.621610  187305 retry.go:31] will retry after 569.620755ms: waiting for machine to come up
	I1028 12:15:56.192393  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192872  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.192803  187305 retry.go:31] will retry after 703.637263ms: waiting for machine to come up
	I1028 12:15:56.897762  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898304  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.898214  187305 retry.go:31] will retry after 713.628482ms: waiting for machine to come up
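The repeated "waiting for machine to come up" retries above are the kvm2 driver polling for the domain's DHCP lease with a growing backoff. A hedged manual equivalent using virsh (domain name and qemu:///system URI taken from this log; not necessarily how the driver implements the check):

    until virsh -c qemu:///system domifaddr default-k8s-diff-port-349222 | grep -q ipv4; do
        sleep 1
    done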
	I1028 12:15:54.818674  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:54.822118  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822477  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:54.822508  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822713  186170 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:54.827066  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
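The bash one-liner above is minikube's /etc/hosts upsert idiom: drop any line already ending in the hostname, append the fresh mapping, then copy the temp file back with sudo. A hypothetical follow-up check (not run in this log):

    getent hosts host.minikube.internal    # expected: 192.168.61.1  host.minikube.internal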
	I1028 12:15:54.839718  186170 kubeadm.go:883] updating cluster {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:54.839871  186170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:15:54.839932  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:54.896582  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:54.896647  186170 ssh_runner.go:195] Run: which lz4
	I1028 12:15:54.901264  186170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:54.905758  186170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:54.905798  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:15:56.763719  186170 crio.go:462] duration metric: took 1.862485619s to copy over tarball
	I1028 12:15:56.763807  186170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:58.321600  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:00.018244  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.018285  185942 pod_ready.go:82] duration metric: took 6.206271068s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.018297  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028610  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.028638  185942 pod_ready.go:82] duration metric: took 10.334289ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028653  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041057  185942 pod_ready.go:93] pod "kube-proxy-dl7xq" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.041091  185942 pod_ready.go:82] duration metric: took 12.429027ms for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041106  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049617  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.049645  185942 pod_ready.go:82] duration metric: took 8.529436ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049659  185942 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:57.613338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613844  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613873  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:57.613796  187305 retry.go:31] will retry after 1.188479203s: waiting for machine to come up
	I1028 12:15:58.803300  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803690  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803724  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:58.803650  187305 retry.go:31] will retry after 1.439057212s: waiting for machine to come up
	I1028 12:16:00.244665  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245201  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245239  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:00.245141  187305 retry.go:31] will retry after 1.842038011s: waiting for machine to come up
	I1028 12:16:02.090283  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090879  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:02.090828  187305 retry.go:31] will retry after 1.556155538s: waiting for machine to come up
	I1028 12:15:59.824110  186170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060253776s)
	I1028 12:15:59.824148  186170 crio.go:469] duration metric: took 3.060398276s to extract the tarball
	I1028 12:15:59.824158  186170 ssh_runner.go:146] rm: /preloaded.tar.lz4
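The preload path above copies a ~473 MB lz4-compressed tarball of container images into the VM, unpacks it under /var (where CRI-O's image store lives) so the images are available without pulling, and then removes the tarball. A hedged standalone equivalent of the extract-and-verify step (paths from this log):

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json    # the run below still finds no v1.20.0 images, so minikube falls back to loading them one by one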
	I1028 12:15:59.871783  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:59.913216  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:59.913249  186170 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:15:59.913338  186170 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.913374  186170 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.913404  186170 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.913415  186170 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.913435  186170 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.913459  186170 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.913378  186170 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:15:59.913372  186170 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:15:59.914923  186170 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.914935  186170 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.914944  186170 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.914924  186170 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:15:59.915002  186170 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.915023  186170 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.107392  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.125355  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.128498  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.134762  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.138350  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.141722  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:16:00.186291  186170 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:16:00.186340  186170 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.186404  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253168  186170 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:16:00.253211  186170 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.253256  186170 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:16:00.253279  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253288  186170 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.253328  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290772  186170 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:16:00.290817  186170 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.290857  186170 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:16:00.290890  186170 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:16:00.290869  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290913  186170 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:16:00.290946  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290970  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.290896  186170 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.291016  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.291049  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.291080  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.317629  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.377316  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.377376  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.377463  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.377515  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.488216  186170 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:16:00.488279  186170 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.488337  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.513051  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.556242  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.556277  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.556380  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.556435  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.556544  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.556560  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.634253  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.737688  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.737739  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.737799  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:16:00.737870  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:16:00.737897  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:16:00.738000  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.832218  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:16:00.832247  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:16:00.832284  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:16:00.844460  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.880788  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:16:01.121687  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:01.269970  186170 cache_images.go:92] duration metric: took 1.356701981s to LoadCachedImages
	W1028 12:16:01.270091  186170 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 12:16:01.270114  186170 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1028 12:16:01.270229  186170 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089993 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
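The kubelet stanza printed above is written out below as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line is the usual systemd way of clearing the base unit's command before substituting the minikube-specific one. A hedged way to inspect the merged unit on the node:

    sudo systemctl cat kubelet          # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload        # daemon-reload and systemctl start kubelet are also run below in this log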
	I1028 12:16:01.270317  186170 ssh_runner.go:195] Run: crio config
	I1028 12:16:01.330579  186170 cni.go:84] Creating CNI manager for ""
	I1028 12:16:01.330604  186170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:01.330615  186170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:01.330634  186170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089993 NodeName:old-k8s-version-089993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:16:01.330861  186170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089993"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:01.330940  186170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:16:01.342449  186170 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:01.342542  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:01.354804  186170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:16:01.373823  186170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:01.393848  186170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
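The 2123-byte kubeadm.yaml.new written above is the config printed a few lines earlier: four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), with evictionHard set to 0% and imageGCHighThresholdPercent set to 100 so the kubelet never triggers disk-pressure eviction or image GC inside the small test VM. A hypothetical spot-check of the rendered file (not run in this log):

    grep -c '^kind:' /var/tmp/minikube/kubeadm.yaml.new        # expect 4
    grep -A4 'evictionHard:' /var/tmp/minikube/kubeadm.yaml.new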
	I1028 12:16:01.414537  186170 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:01.419057  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:01.434491  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:01.605220  186170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:01.629171  186170 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993 for IP: 192.168.61.119
	I1028 12:16:01.629198  186170 certs.go:194] generating shared ca certs ...
	I1028 12:16:01.629223  186170 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:01.629411  186170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:01.629473  186170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:01.629486  186170 certs.go:256] generating profile certs ...
	I1028 12:16:01.629625  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key
	I1028 12:16:01.629692  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee
	I1028 12:16:01.629740  186170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key
	I1028 12:16:01.629886  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:01.629929  186170 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:01.629943  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:01.629984  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:01.630025  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:01.630060  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:01.630113  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:01.630911  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:01.673352  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:01.705371  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:01.731174  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:01.775555  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:16:01.809878  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:01.842241  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:01.876753  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:16:01.914897  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:01.945991  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:01.977763  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:02.010010  186170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:02.034184  186170 ssh_runner.go:195] Run: openssl version
	I1028 12:16:02.042784  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:02.055148  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060669  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060751  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.067345  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:02.079427  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:02.091613  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.096996  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.097061  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.103561  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:02.115762  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:02.128405  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133889  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133961  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.140274  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
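The openssl/ln pairs above implement the OpenSSL hashed-directory convention: each CA certificate in /etc/ssl/certs gets a symlink named <subject-hash>.0 so TLS libraries can locate it without scanning the directory. A minimal sketch of one of the links created above (the b5213941 hash for minikubeCA is taken from this log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"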
	I1028 12:16:02.155800  186170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:02.162343  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:02.170755  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:02.179332  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:02.187694  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:02.196183  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:02.204538  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
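The -checkend 86400 runs above ask whether each certificate will still be valid 24 hours from now: openssl exits 0 if it will and non-zero if it expires within the window, which is what minikube uses to decide whether the existing certs can be reused. A hedged standalone example:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "still valid in 24h, reuse it"
    else
        echo "expires within 24h (or unreadable), regenerate"
    fi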
	I1028 12:16:02.212604  186170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:02.212715  186170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:02.212796  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.260250  186170 cri.go:89] found id: ""
	I1028 12:16:02.260350  186170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:02.274246  186170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:02.274269  186170 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:02.274335  186170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:02.287972  186170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:02.288983  186170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:16:02.289661  186170 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-132631/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089993" cluster setting kubeconfig missing "old-k8s-version-089993" context setting]
	I1028 12:16:02.290778  186170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:02.292747  186170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:02.306303  186170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1028 12:16:02.306357  186170 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:02.306375  186170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:02.306438  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.348962  186170 cri.go:89] found id: ""
	I1028 12:16:02.349041  186170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:02.366483  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:02.377667  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:02.377690  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:02.377758  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:02.387857  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:02.387915  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:02.398137  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:02.408922  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:02.408992  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:02.419044  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.428952  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:02.429020  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.439488  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:02.450112  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:02.450174  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:02.461051  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:02.472059  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.607734  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.165378  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:04.555857  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:03.648337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648760  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:03.648736  187305 retry.go:31] will retry after 2.586516153s: waiting for machine to come up
	I1028 12:16:06.236934  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237402  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237433  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:06.237352  187305 retry.go:31] will retry after 3.507901898s: waiting for machine to come up
	I1028 12:16:03.452795  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.710145  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.811788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.903114  186170 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:03.903247  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.403775  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.904258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.403398  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.903353  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.403907  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.903762  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.403316  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.904259  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.557581  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.056276  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.746980  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747449  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747482  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:09.747401  187305 retry.go:31] will retry after 4.499585546s: waiting for machine to come up
	I1028 12:16:08.403804  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:08.903726  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.404155  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.903968  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.403990  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.903742  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.403836  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.904088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.403293  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.903635  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.487114  185546 start.go:364] duration metric: took 56.6590668s to acquireMachinesLock for "no-preload-871884"
	I1028 12:16:15.487176  185546 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:16:15.487190  185546 fix.go:54] fixHost starting: 
	I1028 12:16:15.487650  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:16:15.487713  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:16:15.508857  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I1028 12:16:15.509318  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:16:15.510000  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:16:15.510037  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:16:15.510385  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:16:15.510599  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:15.510779  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:16:15.512738  185546 fix.go:112] recreateIfNeeded on no-preload-871884: state=Stopped err=<nil>
	I1028 12:16:15.512772  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	W1028 12:16:15.512963  185546 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:16:15.514890  185546 out.go:177] * Restarting existing kvm2 VM for "no-preload-871884" ...
	I1028 12:16:11.056427  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:13.058549  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.556621  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.516551  185546 main.go:141] libmachine: (no-preload-871884) Calling .Start
	I1028 12:16:15.516786  185546 main.go:141] libmachine: (no-preload-871884) Ensuring networks are active...
	I1028 12:16:15.517934  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network default is active
	I1028 12:16:15.518543  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network mk-no-preload-871884 is active
	I1028 12:16:15.519028  185546 main.go:141] libmachine: (no-preload-871884) Getting domain xml...
	I1028 12:16:15.519878  185546 main.go:141] libmachine: (no-preload-871884) Creating domain...
	I1028 12:16:14.249128  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249645  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has current primary IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249674  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Found IP for machine: 192.168.50.75
	I1028 12:16:14.249689  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserving static IP address...
	I1028 12:16:14.250120  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserved static IP address: 192.168.50.75
	I1028 12:16:14.250139  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for SSH to be available...
	I1028 12:16:14.250164  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.250205  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | skip adding static IP to network mk-default-k8s-diff-port-349222 - found existing host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"}
	I1028 12:16:14.250222  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Getting to WaitForSSH function...
	I1028 12:16:14.252540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.252883  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.252926  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.253035  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH client type: external
	I1028 12:16:14.253075  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa (-rw-------)
	I1028 12:16:14.253100  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:14.253115  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | About to run SSH command:
	I1028 12:16:14.253129  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | exit 0
	I1028 12:16:14.373688  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | SSH cmd err, output: <nil>: 
	I1028 12:16:14.374101  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetConfigRaw
	I1028 12:16:14.374713  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.377338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.377824  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.377857  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.378094  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:16:14.378326  186547 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:14.378345  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:14.378556  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.380694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.380976  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.380992  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.381143  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.381356  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381521  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381678  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.381882  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.382107  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.382119  186547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:14.490030  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:14.490061  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490303  186547 buildroot.go:166] provisioning hostname "default-k8s-diff-port-349222"
	I1028 12:16:14.490331  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490523  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.492989  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493395  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.493426  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493626  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.493794  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.493960  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.494104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.494258  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.494427  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.494439  186547 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-349222 && echo "default-k8s-diff-port-349222" | sudo tee /etc/hostname
	I1028 12:16:14.604373  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-349222
	
	I1028 12:16:14.604405  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.607135  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607437  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.607465  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.607891  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608060  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608187  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.608353  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.608549  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.608569  186547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-349222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-349222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-349222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:14.714933  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:14.714963  186547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:14.714990  186547 buildroot.go:174] setting up certificates
	I1028 12:16:14.714998  186547 provision.go:84] configureAuth start
	I1028 12:16:14.715007  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.715321  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.718051  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.718406  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718504  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.720638  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.720945  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.720972  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.721127  186547 provision.go:143] copyHostCerts
	I1028 12:16:14.721198  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:14.721213  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:14.721283  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:14.721407  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:14.721417  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:14.721446  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:14.721522  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:14.721544  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:14.721571  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:14.721634  186547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-349222 san=[127.0.0.1 192.168.50.75 default-k8s-diff-port-349222 localhost minikube]
	I1028 12:16:14.854227  186547 provision.go:177] copyRemoteCerts
	I1028 12:16:14.854285  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:14.854314  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.857250  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857590  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.857620  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857897  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.858091  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.858286  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.858434  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:14.940752  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:14.967575  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 12:16:14.992693  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:16:15.017801  186547 provision.go:87] duration metric: took 302.790563ms to configureAuth
	I1028 12:16:15.017831  186547 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:15.018073  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:15.018168  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.021181  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.021574  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021719  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.021894  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022113  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022317  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.022564  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.022744  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.022761  186547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:15.257308  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:15.257339  186547 machine.go:96] duration metric: took 878.998573ms to provisionDockerMachine
	I1028 12:16:15.257350  186547 start.go:293] postStartSetup for "default-k8s-diff-port-349222" (driver="kvm2")
	I1028 12:16:15.257360  186547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:15.257378  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.257695  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:15.257721  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.260380  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260767  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.260795  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260990  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.261186  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.261370  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.261513  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.341376  186547 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:15.345736  186547 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:15.345760  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:15.345820  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:15.345891  186547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:15.345978  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:15.355662  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:15.381750  186547 start.go:296] duration metric: took 124.385481ms for postStartSetup
	I1028 12:16:15.381788  186547 fix.go:56] duration metric: took 22.00329785s for fixHost
	I1028 12:16:15.381807  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.384756  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385099  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.385130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385359  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.385587  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385782  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385974  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.386165  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.386345  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.386355  186547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:15.486905  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117775.445749296
	
	I1028 12:16:15.486934  186547 fix.go:216] guest clock: 1730117775.445749296
	I1028 12:16:15.486944  186547 fix.go:229] Guest: 2024-10-28 12:16:15.445749296 +0000 UTC Remote: 2024-10-28 12:16:15.381791731 +0000 UTC m=+192.967058764 (delta=63.957565ms)
	I1028 12:16:15.487005  186547 fix.go:200] guest clock delta is within tolerance: 63.957565ms
	I1028 12:16:15.487018  186547 start.go:83] releasing machines lock for "default-k8s-diff-port-349222", held for 22.108560462s
	I1028 12:16:15.487082  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.487382  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:15.490840  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491343  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.491374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491528  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492208  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492431  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492581  186547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:15.492657  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.492706  186547 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:15.492746  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.496062  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496119  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496544  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496901  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497225  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497257  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497458  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497583  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497665  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.497798  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497977  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.590741  186547 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:15.615347  186547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:15.762979  186547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:15.770132  186547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:15.770221  186547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:15.788651  186547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:15.788684  186547 start.go:495] detecting cgroup driver to use...
	I1028 12:16:15.788751  186547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:15.806118  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:15.820916  186547 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:15.820986  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:15.835770  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:15.850994  186547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:15.979465  186547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:16.160837  186547 docker.go:233] disabling docker service ...
	I1028 12:16:16.160924  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:16.177934  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:16.194616  186547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:16.320605  186547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:16.464175  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:16.479626  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:16.502747  186547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:16.502889  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.514636  186547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:16.514695  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.528137  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.539961  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.552263  186547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:16.566275  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.578632  186547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.599084  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.611250  186547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:16.621976  186547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:16.622052  186547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:16.640800  186547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:16:16.651767  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:16.806628  186547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:16.903584  186547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:16.903655  186547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:16.909873  186547 start.go:563] Will wait 60s for crictl version
	I1028 12:16:16.909950  186547 ssh_runner.go:195] Run: which crictl
	I1028 12:16:16.915388  186547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:16.964424  186547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:16.964517  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:16.997415  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:17.032323  186547 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:17.033747  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:17.036500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.036903  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:17.036935  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.037118  186547 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:17.041698  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:17.056649  186547 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:17.056792  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:17.056840  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:17.099143  186547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:17.099233  186547 ssh_runner.go:195] Run: which lz4
	I1028 12:16:17.103882  186547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:16:17.108660  186547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:16:17.108699  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:16:13.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:13.903443  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.404017  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.903385  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.403903  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.904106  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.403713  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.903397  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.404299  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.903855  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.559178  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:19.560739  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:16.842086  185546 main.go:141] libmachine: (no-preload-871884) Waiting to get IP...
	I1028 12:16:16.843056  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:16.843514  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:16.843599  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:16.843484  187500 retry.go:31] will retry after 240.188984ms: waiting for machine to come up
	I1028 12:16:17.085193  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.085702  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.085739  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.085649  187500 retry.go:31] will retry after 361.44193ms: waiting for machine to come up
	I1028 12:16:17.448961  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.449619  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.449645  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.449576  187500 retry.go:31] will retry after 386.179326ms: waiting for machine to come up
	I1028 12:16:17.837097  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.837879  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.837907  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.837834  187500 retry.go:31] will retry after 531.12665ms: waiting for machine to come up
	I1028 12:16:18.370266  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:18.370803  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:18.370834  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:18.370746  187500 retry.go:31] will retry after 760.20134ms: waiting for machine to come up
	I1028 12:16:19.132853  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.133415  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.133444  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.133360  187500 retry.go:31] will retry after 817.773678ms: waiting for machine to come up
	I1028 12:16:19.952317  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.952800  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.952824  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.952760  187500 retry.go:31] will retry after 861.798605ms: waiting for machine to come up
	I1028 12:16:20.816156  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:20.816794  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:20.816826  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:20.816750  187500 retry.go:31] will retry after 908.062214ms: waiting for machine to come up
	I1028 12:16:18.686980  186547 crio.go:462] duration metric: took 1.583134893s to copy over tarball
	I1028 12:16:18.687053  186547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:16:21.016264  186547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.329174428s)
	I1028 12:16:21.016309  186547 crio.go:469] duration metric: took 2.329300291s to extract the tarball
	I1028 12:16:21.016322  186547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:16:21.053950  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:21.112876  186547 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:16:21.112903  186547 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:16:21.112914  186547 kubeadm.go:934] updating node { 192.168.50.75 8444 v1.31.2 crio true true} ...
	I1028 12:16:21.113037  186547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-349222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:21.113119  186547 ssh_runner.go:195] Run: crio config
	I1028 12:16:21.179853  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:21.179877  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:21.179888  186547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:21.179907  186547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.75 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-349222 NodeName:default-k8s-diff-port-349222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:21.180039  186547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.75
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-349222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.75"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.75"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:21.180117  186547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:21.191650  186547 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:21.191721  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:21.201670  186547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1028 12:16:21.220426  186547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:21.240774  186547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
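The config dump above is the full kubeadm.yaml that gets uploaded to /var/tmp/minikube/kubeadm.yaml.new: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration separated by `---`. A rough, stdlib-only Go sketch for sanity-checking that all four documents are present is shown here; the file path is taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path comes from the log above; adjust if running elsewhere.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// Split the multi-document YAML on its "---" separators and
	// report the kind: line of each document.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}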
	I1028 12:16:21.263336  186547 ssh_runner.go:195] Run: grep 192.168.50.75	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:21.267818  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:21.281577  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:21.441517  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:21.464117  186547 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222 for IP: 192.168.50.75
	I1028 12:16:21.464145  186547 certs.go:194] generating shared ca certs ...
	I1028 12:16:21.464167  186547 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:21.464392  186547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:21.464458  186547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:21.464485  186547 certs.go:256] generating profile certs ...
	I1028 12:16:21.464599  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/client.key
	I1028 12:16:21.464691  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key.e54e33e0
	I1028 12:16:21.464749  186547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key
	I1028 12:16:21.464919  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:21.464967  186547 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:21.464981  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:21.465006  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:21.465031  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:21.465069  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:21.465124  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:21.465976  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:21.511145  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:21.572071  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:21.613442  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:21.655508  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 12:16:21.687378  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:16:21.713227  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:21.738909  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:21.765274  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:21.792427  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:21.817632  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:21.842996  186547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:21.861059  186547 ssh_runner.go:195] Run: openssl version
	I1028 12:16:21.867814  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:21.880769  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886245  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886325  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.893179  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:21.908974  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:21.926992  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932350  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932428  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.939073  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:21.952302  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:21.965485  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971486  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971564  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.978531  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:21.995399  186547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:22.001453  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:22.009449  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:22.016898  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:22.024410  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:22.033151  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:22.040981  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
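The series of `openssl x509 -checkend 86400` runs above verifies that none of the control-plane certificates expires within the next 24 hours. The same check can be expressed with Go's crypto/x509, as in this illustrative sketch (the cert path is one of the files named in the log; the rest is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates the log checks with -checkend 86400.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// will already be expired 24 hours from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}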
	I1028 12:16:22.048298  186547 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:22.048441  186547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:22.048531  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.095210  186547 cri.go:89] found id: ""
	I1028 12:16:22.095319  186547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:22.111740  186547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:22.111772  186547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:22.111828  186547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:22.122472  186547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:22.123648  186547 kubeconfig.go:125] found "default-k8s-diff-port-349222" server: "https://192.168.50.75:8444"
	I1028 12:16:22.126117  186547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:22.137057  186547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.75
	I1028 12:16:22.137096  186547 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:22.137108  186547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:22.137179  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.180526  186547 cri.go:89] found id: ""
	I1028 12:16:22.180638  186547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:22.197697  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:22.208176  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:22.208197  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:22.208246  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:16:22.218379  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:22.218438  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:22.228844  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:16:22.239330  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:22.239407  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:22.250200  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.260309  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:22.260374  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.271041  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:16:22.281556  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:22.281637  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:22.294003  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:22.305123  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:22.426791  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:18.403494  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:18.903364  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.403869  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.904257  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.404252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.904028  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.404218  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.903631  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.403882  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.904188  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.058068  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:24.059822  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:21.726767  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:21.727332  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:21.727373  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:21.727224  187500 retry.go:31] will retry after 1.684184533s: waiting for machine to come up
	I1028 12:16:23.412691  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:23.413228  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:23.413254  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:23.413177  187500 retry.go:31] will retry after 1.416062445s: waiting for machine to come up
	I1028 12:16:24.830846  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:24.831450  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:24.831480  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:24.831393  187500 retry.go:31] will retry after 2.716897952s: waiting for machine to come up
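The `retry.go:31] will retry after …` lines above show the machine-up wait retrying with randomized, growing delays (roughly 0.9s, 1.7s, 1.4s, 2.7s, then ~3.9s). A generic Go sketch of that pattern, explicitly not minikube's own retry implementation, could look like this:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn with a jittered, roughly doubling delay until it
// succeeds or attempts run out. This is an illustrative stand-in for the
// "will retry after" behaviour in the log, not minikube's retry.go.
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		wait := delay/2 + jitter
		fmt.Printf("will retry after %v\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("gave up waiting")
}

func main() {
	start := time.Now()
	err := retry(10, time.Second, func() error {
		// Placeholder for "does the machine have an IP yet?".
		if time.Since(start) < 5*time.Second {
			return errors.New("machine not up yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}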
	I1028 12:16:23.288371  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.506229  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.575063  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.644776  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:23.644896  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.145579  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.645050  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.666456  186547 api_server.go:72] duration metric: took 1.021679294s to wait for apiserver process to appear ...
	I1028 12:16:24.666493  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:24.666518  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:24.667086  186547 api_server.go:269] stopped: https://192.168.50.75:8444/healthz: Get "https://192.168.50.75:8444/healthz": dial tcp 192.168.50.75:8444: connect: connection refused
	I1028 12:16:25.166765  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:23.404152  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:23.904225  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.403333  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.904323  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.404279  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.904317  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.404253  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.904083  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.403621  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.903752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.336957  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.337000  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.337015  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.382075  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.382110  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.667083  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.671910  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:28.671935  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.167591  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.173364  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:29.173397  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.666902  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.672205  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:16:29.679964  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:16:29.680002  186547 api_server.go:131] duration metric: took 5.013500479s to wait for apiserver health ...
	I1028 12:16:29.680014  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:29.680032  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:29.681992  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
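The healthz probes above step through the usual restart sequence: connection refused while the apiserver is still coming up, 403 for the anonymous user before RBAC bootstrap roles exist, 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still pending, and finally 200 with body `ok`. A small sketch of such a polling loop follows (endpoint taken from the log; TLS verification is skipped only because this sketch loads no cluster CA or client certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint from the log above.
	url := "https://192.168.50.75:8444/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d: %s\n", resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}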
	I1028 12:16:26.558629  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.560116  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:27.550893  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:27.551454  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:27.551476  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:27.551438  187500 retry.go:31] will retry after 2.986712877s: waiting for machine to come up
	I1028 12:16:30.539999  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:30.540601  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:30.540632  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:30.540526  187500 retry.go:31] will retry after 3.947007446s: waiting for machine to come up
	I1028 12:16:29.683325  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:16:29.697362  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:16:29.717296  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:16:29.726327  186547 system_pods.go:59] 8 kube-system pods found
	I1028 12:16:29.726363  186547 system_pods.go:61] "coredns-7c65d6cfc9-k5h7n" [e203fcce-1a8a-431b-a816-d75b33ca9417] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:16:29.726374  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [2214daee-0302-44cd-9297-836eeb011232] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:16:29.726391  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [c4331c24-07e2-4b50-ab04-31bcd00960e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:16:29.726402  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [9dddd9fb-ad03-4771-af1b-d9e1e024af52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:16:29.726413  186547 system_pods.go:61] "kube-proxy-bqq65" [ed5d0c14-0ddb-4446-a2f7-ae457d629fb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 12:16:29.726423  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [9cfcc366-038f-43a9-b919-48742fa419af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:16:29.726434  186547 system_pods.go:61] "metrics-server-6867b74b74-cgkz9" [3d919412-efb8-4030-a5d0-3c325c824c48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:16:29.726445  186547 system_pods.go:61] "storage-provisioner" [613b003c-1eee-4294-947f-ea7a21edc8d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:16:29.726464  186547 system_pods.go:74] duration metric: took 9.135782ms to wait for pod list to return data ...
	I1028 12:16:29.726478  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:16:29.729971  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:16:29.729996  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:16:29.730009  186547 node_conditions.go:105] duration metric: took 3.525858ms to run NodePressure ...
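The pod inventory above (8 kube-system pods, most not yet Ready) comes from a plain list against the apiserver before the per-pod readiness waits begin. A hedged client-go sketch of that listing step is below; the kubeconfig path is an assumption, not a path taken from this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the harness uses its own profile paths.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}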
	I1028 12:16:29.730035  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:30.043775  186547 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048614  186547 kubeadm.go:739] kubelet initialised
	I1028 12:16:30.048638  186547 kubeadm.go:740] duration metric: took 4.83853ms waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048647  186547 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:16:30.053908  186547 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:32.063283  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.404110  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.904058  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.404042  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.903819  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.404114  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.904140  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.404241  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.903586  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.403858  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.903566  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.057577  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:33.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:35.557338  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:34.491658  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492175  185546 main.go:141] libmachine: (no-preload-871884) Found IP for machine: 192.168.72.156
	I1028 12:16:34.492197  185546 main.go:141] libmachine: (no-preload-871884) Reserving static IP address...
	I1028 12:16:34.492215  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has current primary IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492674  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.492704  185546 main.go:141] libmachine: (no-preload-871884) Reserved static IP address: 192.168.72.156
	I1028 12:16:34.492739  185546 main.go:141] libmachine: (no-preload-871884) DBG | skip adding static IP to network mk-no-preload-871884 - found existing host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"}
	I1028 12:16:34.492763  185546 main.go:141] libmachine: (no-preload-871884) DBG | Getting to WaitForSSH function...
	I1028 12:16:34.492777  185546 main.go:141] libmachine: (no-preload-871884) Waiting for SSH to be available...
	I1028 12:16:34.495176  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495502  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.495536  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495682  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH client type: external
	I1028 12:16:34.495714  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa (-rw-------)
	I1028 12:16:34.495747  185546 main.go:141] libmachine: (no-preload-871884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:34.495770  185546 main.go:141] libmachine: (no-preload-871884) DBG | About to run SSH command:
	I1028 12:16:34.495796  185546 main.go:141] libmachine: (no-preload-871884) DBG | exit 0
	I1028 12:16:34.625650  185546 main.go:141] libmachine: (no-preload-871884) DBG | SSH cmd err, output: <nil>: 
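WaitForSSH above builds an external `ssh` invocation with host-key checking disabled and runs `exit 0` until the guest's sshd answers. A rough Go equivalent of composing and repeating that probe is sketched here (key path and address copied from the log; the attempt count and sleep are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa",
		"-p", "22",
		"docker@192.168.72.156",
		"exit 0",
	}
	// Keep probing until `exit 0` succeeds over SSH.
	for i := 0; i < 30; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}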
	I1028 12:16:34.625959  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetConfigRaw
	I1028 12:16:34.626602  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.629137  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629442  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.629477  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629733  185546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/config.json ...
	I1028 12:16:34.629938  185546 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:34.629957  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:34.630153  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.632415  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.632777  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.632804  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.633033  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.633247  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633422  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633592  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.633762  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.633954  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.633968  185546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:34.738368  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:34.738406  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738696  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:16:34.738729  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738926  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.741750  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742216  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.742322  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742339  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.742538  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742689  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742857  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.743032  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.743248  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.743266  185546 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-871884 && echo "no-preload-871884" | sudo tee /etc/hostname
	I1028 12:16:34.863767  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-871884
	
	I1028 12:16:34.863802  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.867136  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867530  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.867561  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867822  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.868039  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868251  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868430  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.868634  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.868880  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.868905  185546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-871884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-871884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-871884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:34.989420  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
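The remote script above makes the 127.0.1.1 entry idempotent: the hostname line is only rewritten or appended if `no-preload-871884` is not already mapped in /etc/hosts. A native-Go paraphrase of that shell logic, writing to a scratch path so it is safe to run as-is, might look like this (the output path is an assumption):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostname = "no-preload-871884"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")

	// Pass 1: is the hostname already mapped? (mirrors `grep -xq '.*\sno-preload-871884'`)
	found := false
	for _, line := range lines {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
			found = true
			break
		}
	}

	if !found {
		// Pass 2: rewrite an existing 127.0.1.1 entry, or append one.
		replaced := false
		for i, line := range lines {
			if strings.HasPrefix(line, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
	}

	// Write to a scratch path instead of /etc/hosts so the sketch is non-destructive.
	out := strings.Join(lines, "\n") + "\n"
	if err := os.WriteFile("/tmp/hosts.updated", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Print(out)
}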
	I1028 12:16:34.989450  185546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:34.989468  185546 buildroot.go:174] setting up certificates
	I1028 12:16:34.989476  185546 provision.go:84] configureAuth start
	I1028 12:16:34.989485  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.989790  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.992627  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.992977  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.993007  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.993225  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.995586  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.995888  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.995911  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.996122  185546 provision.go:143] copyHostCerts
	I1028 12:16:34.996190  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:34.996204  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:34.996261  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:34.996375  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:34.996384  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:34.996408  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:34.996472  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:34.996482  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:34.996499  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:34.996559  185546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.no-preload-871884 san=[127.0.0.1 192.168.72.156 localhost minikube no-preload-871884]
	I1028 12:16:35.437900  185546 provision.go:177] copyRemoteCerts
	I1028 12:16:35.437961  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:35.437985  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.440936  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441329  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.441361  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441555  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.441756  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.441921  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.442085  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.524911  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:35.554631  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 12:16:35.586946  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:16:35.620121  185546 provision.go:87] duration metric: took 630.630531ms to configureAuth
	I1028 12:16:35.620155  185546 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:35.620395  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:35.620502  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.623316  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623607  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.623643  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623886  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.624099  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624290  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624433  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.624612  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:35.624794  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:35.624810  185546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:35.886145  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:35.886178  185546 machine.go:96] duration metric: took 1.256224912s to provisionDockerMachine
	I1028 12:16:35.886196  185546 start.go:293] postStartSetup for "no-preload-871884" (driver="kvm2")
	I1028 12:16:35.886209  185546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:35.886232  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:35.886615  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:35.886653  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.889615  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890016  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.890048  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.890459  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.890654  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.890798  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.977889  185546 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:35.983360  185546 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:35.983387  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:35.983454  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:35.983543  185546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:35.983674  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:35.997400  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:36.025665  185546 start.go:296] duration metric: took 139.454088ms for postStartSetup
	I1028 12:16:36.025714  185546 fix.go:56] duration metric: took 20.538525254s for fixHost
	I1028 12:16:36.025739  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.028490  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.028933  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.028964  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.029170  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.029386  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029573  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029734  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.029909  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:36.030087  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:36.030098  185546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:36.138559  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117796.101397993
	
	I1028 12:16:36.138589  185546 fix.go:216] guest clock: 1730117796.101397993
	I1028 12:16:36.138599  185546 fix.go:229] Guest: 2024-10-28 12:16:36.101397993 +0000 UTC Remote: 2024-10-28 12:16:36.025719388 +0000 UTC m=+359.787107454 (delta=75.678605ms)
	I1028 12:16:36.138633  185546 fix.go:200] guest clock delta is within tolerance: 75.678605ms
	I1028 12:16:36.138638  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 20.651488254s
	I1028 12:16:36.138663  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.138953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:36.141711  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142144  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.142180  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142323  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.142975  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143165  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143240  185546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:36.143306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.143378  185546 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:36.143399  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.145980  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146166  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146348  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146375  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146507  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146617  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146657  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146701  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.146795  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146882  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.146953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.147013  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.147071  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.147202  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.223364  185546 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:36.246964  185546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:34.561016  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.564296  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.396734  185546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:36.403214  185546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:36.403298  185546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:36.421658  185546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:36.421695  185546 start.go:495] detecting cgroup driver to use...
	I1028 12:16:36.421772  185546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:36.441133  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:36.456750  185546 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:36.456806  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:36.473457  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:36.489210  185546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:36.621054  185546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:36.767341  185546 docker.go:233] disabling docker service ...
	I1028 12:16:36.767432  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:36.784655  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:36.799522  185546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:36.942312  185546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:37.066636  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:37.082284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:37.102462  185546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:37.102530  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.113687  185546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:37.113760  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.125624  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.137036  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.148417  185546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:37.160015  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.171382  185546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.192342  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.204353  185546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:37.215188  185546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:37.215275  185546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:37.230653  185546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:16:37.241484  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:37.382996  185546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:37.479263  185546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:37.479363  185546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:37.485265  185546 start.go:563] Will wait 60s for crictl version
	I1028 12:16:37.485330  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:37.489545  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:37.536126  185546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:37.536212  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.567538  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.600370  185546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:33.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:33.903341  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.403703  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.903445  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.404040  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.904246  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.403798  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.903950  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.403912  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.903423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.559329  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:40.057624  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:37.601686  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:37.604235  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604568  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:37.604601  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604782  185546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:37.609354  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:37.624966  185546 kubeadm.go:883] updating cluster {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:37.625081  185546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:37.625117  185546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:37.664112  185546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:37.664149  185546 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:16:37.664262  185546 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.664306  185546 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.664334  185546 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 12:16:37.664311  185546 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.664352  185546 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.664393  185546 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.664434  185546 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.664399  185546 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666080  185546 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.666083  185546 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.666081  185546 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.666142  185546 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.666085  185546 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 12:16:37.666079  185546 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.666185  185546 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666398  185546 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.840639  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.857089  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.859107  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 12:16:37.859358  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.863640  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.867925  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.876221  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.921581  185546 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 12:16:37.921638  185546 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.921689  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.042970  185546 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 12:16:38.043015  185546 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.043068  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093917  185546 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 12:16:38.093954  185546 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 12:16:38.093973  185546 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.093985  185546 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.094029  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094038  185546 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 12:16:38.094057  185546 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.094087  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.094094  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094030  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093976  185546 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 12:16:38.094143  185546 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.094152  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.094175  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.110134  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.110302  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.188922  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.188979  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.193920  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.193929  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.292698  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.325562  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.331855  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.332873  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.345880  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.345951  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.414842  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.470776  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.470949  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 12:16:38.471044  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.481197  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 12:16:38.481333  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:38.503147  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 12:16:38.503171  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:38.532884  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 12:16:38.533001  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:38.552405  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 12:16:38.552417  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 12:16:38.552472  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552485  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 12:16:38.552523  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:38.552529  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552552  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 12:16:38.552527  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 12:16:38.552598  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 12:16:38.829851  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127678  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.575124569s)
	I1028 12:16:41.127722  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 12:16:41.127744  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.575188461s)
	I1028 12:16:41.127775  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 12:16:41.127785  185546 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.297902587s)
	I1028 12:16:41.127803  185546 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127818  185546 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 12:16:41.127850  185546 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127858  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127895  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:39.064564  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:41.563643  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:38.403644  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:38.904220  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.404068  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.904158  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.403660  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.903678  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.404061  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.903568  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.404297  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.904036  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.058025  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:44.557594  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.190694  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062807881s)
	I1028 12:16:43.190736  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 12:16:43.190752  185546 ssh_runner.go:235] Completed: which crictl: (2.062836368s)
	I1028 12:16:43.190773  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:43.190827  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:43.190831  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:45.281583  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.090685426s)
	I1028 12:16:45.281620  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 12:16:45.281650  185546 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281679  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.090821035s)
	I1028 12:16:45.281698  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281750  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:45.325500  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:42.565395  186547 pod_ready.go:93] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.565425  186547 pod_ready.go:82] duration metric: took 12.511487215s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.565438  186547 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572364  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.572388  186547 pod_ready.go:82] duration metric: took 6.941356ms for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572402  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579074  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.579099  186547 pod_ready.go:82] duration metric: took 6.689137ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579116  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584088  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.584108  186547 pod_ready.go:82] duration metric: took 4.985095ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584118  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588810  186547 pod_ready.go:93] pod "kube-proxy-bqq65" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.588837  186547 pod_ready.go:82] duration metric: took 4.711896ms for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588849  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758349  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:43.758376  186547 pod_ready.go:82] duration metric: took 1.169519383s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758387  186547 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:45.766209  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.404022  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:43.903570  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.403673  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.903585  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.403476  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.904069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.403906  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.904264  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.903991  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.059150  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.556589  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.174287  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.84875195s)
	I1028 12:16:49.174340  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 12:16:49.174291  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.892568087s)
	I1028 12:16:49.174422  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 12:16:49.174427  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:49.174466  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:49.174524  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:48.265641  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:50.271513  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:48.404207  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:48.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.404088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.903614  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.403587  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.904256  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.404314  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.903794  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.404122  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.903312  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.557320  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.557540  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:51.438821  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.26426785s)
	I1028 12:16:51.438857  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 12:16:51.438890  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.264449757s)
	I1028 12:16:51.438893  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:51.438911  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 12:16:51.438945  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:52.890902  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451935078s)
	I1028 12:16:52.890933  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 12:16:52.890960  185546 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:52.891010  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:53.643145  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 12:16:53.643208  185546 cache_images.go:123] Successfully loaded all cached images
	I1028 12:16:53.643216  185546 cache_images.go:92] duration metric: took 15.979050279s to LoadCachedImages
	I1028 12:16:53.643231  185546 kubeadm.go:934] updating node { 192.168.72.156 8443 v1.31.2 crio true true} ...
	I1028 12:16:53.643393  185546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-871884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:53.643480  185546 ssh_runner.go:195] Run: crio config
	I1028 12:16:53.701778  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:16:53.701805  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:53.701814  185546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:53.701836  185546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.156 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-871884 NodeName:no-preload-871884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:53.701952  185546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-871884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.156"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.156"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:53.702019  185546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:53.714245  185546 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:53.714327  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:53.725610  185546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 12:16:53.745071  185546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:53.766897  185546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1028 12:16:53.787043  185546 ssh_runner.go:195] Run: grep 192.168.72.156	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:53.791580  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:53.805088  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:53.945235  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:53.964073  185546 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884 for IP: 192.168.72.156
	I1028 12:16:53.964099  185546 certs.go:194] generating shared ca certs ...
	I1028 12:16:53.964115  185546 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:53.964290  185546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:53.964338  185546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:53.964355  185546 certs.go:256] generating profile certs ...
	I1028 12:16:53.964458  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.key
	I1028 12:16:53.964533  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key.6934b48e
	I1028 12:16:53.964584  185546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key
	I1028 12:16:53.964719  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:53.964750  185546 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:53.964765  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:53.964801  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:53.964831  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:53.964866  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:53.964921  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:53.965632  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:54.004592  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:54.044270  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:54.079496  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:54.114473  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:16:54.141836  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:54.175201  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:54.202282  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:54.227874  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:54.254818  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:54.282950  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:54.310204  185546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:54.328834  185546 ssh_runner.go:195] Run: openssl version
	I1028 12:16:54.335391  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:54.347474  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352687  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352755  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.358834  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:54.373155  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:54.387035  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392179  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392281  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.398488  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:54.412352  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:54.426361  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431415  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431470  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.437583  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:54.450708  185546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:54.456625  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:54.463458  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:54.469939  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:54.477873  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:54.484962  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:54.491679  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
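Each "-checkend 86400" run above asks openssl whether the certificate will still be valid 24 hours from now. A minimal Go sketch of the same check for one of the certs named in the log (standalone, not minikube's own code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the paths checked in the log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of "openssl x509 -checkend 86400": does the cert outlive the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}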
	I1028 12:16:54.498106  185546 kubeadm.go:392] StartCluster: {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:54.498211  185546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:54.498287  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.543142  185546 cri.go:89] found id: ""
	I1028 12:16:54.543250  185546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:54.555948  185546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:54.555971  185546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:54.556021  185546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:54.566954  185546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:54.567990  185546 kubeconfig.go:125] found "no-preload-871884" server: "https://192.168.72.156:8443"
	I1028 12:16:54.570149  185546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:54.581005  185546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.156
	I1028 12:16:54.581039  185546 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:54.581051  185546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:54.581100  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.622676  185546 cri.go:89] found id: ""
	I1028 12:16:54.622742  185546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:54.642427  185546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:54.655104  185546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:54.655131  185546 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:54.655199  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:54.665367  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:54.665432  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:54.675664  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:54.685921  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:54.685997  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:54.698451  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.709982  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:54.710060  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.721243  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:54.731699  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:54.731780  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:54.743365  185546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:54.754284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:54.868055  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.645470  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.858805  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.940632  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
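The five kubeadm commands above re-run individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml rather than performing a full "kubeadm init". A minimal sketch of invoking the same phase sequence (an assumption for illustration: kubeadm is on PATH and the program runs as root on the node, which is not how the harness invokes it):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same phase order as logged above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v: err=%v\n%s", args, err, out)
	}
}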
	I1028 12:16:56.020654  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:56.020735  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.764963  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:54.766822  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.768500  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.403716  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:53.903325  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.404326  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.903529  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.403679  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.903480  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.403429  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.904252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.403496  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.058614  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.556085  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:00.556460  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.521589  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.021710  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.066266  185546 api_server.go:72] duration metric: took 1.045608096s to wait for apiserver process to appear ...
	I1028 12:16:57.066305  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:57.066326  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:16:57.066862  185546 api_server.go:269] stopped: https://192.168.72.156:8443/healthz: Get "https://192.168.72.156:8443/healthz": dial tcp 192.168.72.156:8443: connect: connection refused
	I1028 12:16:57.567124  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.159147  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.159179  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.159193  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.171505  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.171530  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.566560  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.570920  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:00.570947  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.066537  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.071173  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.071205  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.566517  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.577822  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.577851  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:02.066514  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:02.071117  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:17:02.078265  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:17:02.078293  185546 api_server.go:131] duration metric: took 5.011981306s to wait for apiserver health ...
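The sequence above shows the apiserver coming up: connection refused, then 403 for the anonymous probe, then 500 while post-start hooks (RBAC bootstrap roles, priority classes) finish, and finally 200. A minimal sketch of an equivalent healthz poll (the insecure TLS skip is an assumption to keep the sketch self-contained; this is not minikube's api_server.go client):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch standalone; a real check
		// should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.156:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}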
	I1028 12:17:02.078302  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:17:02.078308  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:17:02.080348  185546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:16:59.267565  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:01.766399  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.404020  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:58.903743  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.403548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.903515  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.403423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.903757  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.403620  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.903710  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.403932  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.903729  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.081626  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:17:02.103809  185546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:17:02.135225  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:17:02.152051  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:17:02.152102  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:17:02.152113  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:17:02.152125  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:17:02.152133  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:17:02.152146  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:17:02.152159  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:17:02.152167  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:17:02.152174  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:17:02.152183  185546 system_pods.go:74] duration metric: took 16.930389ms to wait for pod list to return data ...
	I1028 12:17:02.152192  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:17:02.157475  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:17:02.157504  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:17:02.157515  185546 node_conditions.go:105] duration metric: took 5.317861ms to run NodePressure ...
	I1028 12:17:02.157548  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:17:02.476553  185546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482764  185546 kubeadm.go:739] kubelet initialised
	I1028 12:17:02.482789  185546 kubeadm.go:740] duration metric: took 6.205425ms waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482798  185546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:02.487480  185546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.495454  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495482  185546 pod_ready.go:82] duration metric: took 7.976331ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.495495  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495505  185546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.499904  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499931  185546 pod_ready.go:82] duration metric: took 4.41555ms for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.499941  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499948  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.504272  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504300  185546 pod_ready.go:82] duration metric: took 4.345522ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.504325  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504337  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.538786  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538826  185546 pod_ready.go:82] duration metric: took 34.474629ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.538841  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538851  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.939462  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939490  185546 pod_ready.go:82] duration metric: took 400.627739ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.939502  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939511  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.339338  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339369  185546 pod_ready.go:82] duration metric: took 399.848996ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.339384  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339394  185546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.739585  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739640  185546 pod_ready.go:82] duration metric: took 400.235271ms for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.739655  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739665  185546 pod_ready.go:39] duration metric: took 1.256859696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:03.739682  185546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:17:03.755064  185546 ops.go:34] apiserver oom_adj: -16
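The probe above reads the kube-apiserver's oom_adj; -16 makes the kernel much less likely to OOM-kill the process. A standalone sketch of the same read for a given PID (locating the PID with pgrep is left out):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: oomadj <pid>")
		os.Exit(2)
	}
	pid := os.Args[1]
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	adj, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		panic(err)
	}
	fmt.Printf("oom_adj for pid %s: %d\n", pid, adj)
}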
	I1028 12:17:03.755086  185546 kubeadm.go:597] duration metric: took 9.199108841s to restartPrimaryControlPlane
	I1028 12:17:03.755096  185546 kubeadm.go:394] duration metric: took 9.256999682s to StartCluster
	I1028 12:17:03.755111  185546 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.755175  185546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:17:03.757048  185546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.757327  185546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:17:03.757425  185546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:17:03.757535  185546 addons.go:69] Setting storage-provisioner=true in profile "no-preload-871884"
	I1028 12:17:03.757563  185546 addons.go:234] Setting addon storage-provisioner=true in "no-preload-871884"
	I1028 12:17:03.757565  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:17:03.757589  185546 addons.go:69] Setting metrics-server=true in profile "no-preload-871884"
	I1028 12:17:03.757617  185546 addons.go:234] Setting addon metrics-server=true in "no-preload-871884"
	I1028 12:17:03.757568  185546 addons.go:69] Setting default-storageclass=true in profile "no-preload-871884"
	W1028 12:17:03.757626  185546 addons.go:243] addon metrics-server should already be in state true
	I1028 12:17:03.757635  185546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-871884"
	W1028 12:17:03.757573  185546 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:17:03.757669  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.757713  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.758051  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758093  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758196  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758233  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758231  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758355  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.759378  185546 out.go:177] * Verifying Kubernetes components...
	I1028 12:17:03.761108  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:17:03.786180  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I1028 12:17:03.786344  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I1028 12:17:03.787005  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787096  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.787658  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.788034  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.789126  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.789149  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.789333  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.789366  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.790199  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.790591  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.793866  185546 addons.go:234] Setting addon default-storageclass=true in "no-preload-871884"
	W1028 12:17:03.793890  185546 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:17:03.793920  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.794332  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.794384  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.806461  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I1028 12:17:03.806960  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.807572  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1028 12:17:03.807644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.807835  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808074  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.808188  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.808349  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.808603  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.808624  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808993  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.809610  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.809665  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.810531  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.812676  185546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:17:03.813307  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I1028 12:17:03.813821  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.814228  185546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:03.814248  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:17:03.814266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.814350  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.814373  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.814848  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.815284  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.815323  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.817336  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817751  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.817776  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817889  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.818079  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.818219  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.818357  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.830425  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1028 12:17:03.830940  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.831486  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.831507  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.831905  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.832125  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.834275  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.835260  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I1028 12:17:03.835687  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.836180  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.836200  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.836527  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.836604  185546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:17:03.836741  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.838273  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:17:03.838290  185546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:17:03.838306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.838508  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.839044  185546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:03.839060  185546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:17:03.839080  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.842836  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843272  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.843291  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843461  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.843598  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.843767  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.843774  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843909  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.844312  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.844330  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.845228  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.845354  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.845474  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.845623  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.981979  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:17:04.003932  185546 node_ready.go:35] waiting up to 6m0s for node "no-preload-871884" to be "Ready" ...
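node_ready.go above waits up to 6m0s for the node to report Ready. A minimal client-go sketch of an equivalent poll (kubeconfig path and node name are taken from the log; this is not minikube's own implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-871884", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for node to be Ready")
}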
	I1028 12:17:04.071389  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:04.169654  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:04.186781  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:17:04.186808  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:17:04.252889  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:17:04.252921  185546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:17:04.315140  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.315166  185546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:17:04.395995  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.489084  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489122  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489426  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.489445  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489470  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.489481  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489490  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489763  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489781  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.497272  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.497297  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.497647  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.497677  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.497702  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185405  185546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.015712456s)
	I1028 12:17:05.185458  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185469  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.185749  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.185768  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185778  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185786  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.186142  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.186160  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.186149  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.294924  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.294953  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295282  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295301  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295319  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295329  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.295339  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295584  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295615  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295622  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295641  185546 addons.go:475] Verifying addon metrics-server=true in "no-preload-871884"
	I1028 12:17:05.297689  185546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1028 12:17:02.557465  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:04.557517  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:05.298945  185546 addons.go:510] duration metric: took 1.541528913s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1028 12:17:06.008731  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.766439  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:06.267839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:03.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:03.904015  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:03.904157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:03.952859  186170 cri.go:89] found id: ""
	I1028 12:17:03.952891  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.952903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:03.952911  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:03.952972  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:03.991366  186170 cri.go:89] found id: ""
	I1028 12:17:03.991395  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.991406  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:03.991414  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:03.991472  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:04.030462  186170 cri.go:89] found id: ""
	I1028 12:17:04.030494  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.030505  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:04.030513  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:04.030577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:04.066765  186170 cri.go:89] found id: ""
	I1028 12:17:04.066797  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.066808  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:04.066829  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:04.066890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:04.113262  186170 cri.go:89] found id: ""
	I1028 12:17:04.113291  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.113321  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:04.113329  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:04.113397  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:04.162767  186170 cri.go:89] found id: ""
	I1028 12:17:04.162804  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.162816  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:04.162832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:04.162906  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:04.209735  186170 cri.go:89] found id: ""
	I1028 12:17:04.209768  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.209780  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:04.209788  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:04.209853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:04.258945  186170 cri.go:89] found id: ""
	I1028 12:17:04.258981  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.258993  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:04.259004  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:04.259031  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:04.314152  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:04.314191  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:04.330109  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:04.330154  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:04.495068  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:04.495096  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:04.495111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:04.576574  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:04.576612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.129008  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:07.149770  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:07.149835  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:07.200603  186170 cri.go:89] found id: ""
	I1028 12:17:07.200636  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.200648  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:07.200656  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:07.200733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:07.242681  186170 cri.go:89] found id: ""
	I1028 12:17:07.242709  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.242717  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:07.242723  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:07.242770  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:07.286826  186170 cri.go:89] found id: ""
	I1028 12:17:07.286860  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.286873  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:07.286881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:07.286943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:07.327730  186170 cri.go:89] found id: ""
	I1028 12:17:07.327765  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.327777  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:07.327787  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:07.327855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:07.369138  186170 cri.go:89] found id: ""
	I1028 12:17:07.369167  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.369178  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:07.369187  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:07.369257  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:07.411640  186170 cri.go:89] found id: ""
	I1028 12:17:07.411678  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.411690  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:07.411697  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:07.411758  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:07.454066  186170 cri.go:89] found id: ""
	I1028 12:17:07.454099  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.454109  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:07.454119  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:07.454180  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:07.489981  186170 cri.go:89] found id: ""
	I1028 12:17:07.490011  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.490020  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:07.490030  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:07.490044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:07.559890  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:07.559916  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:07.559927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:07.641601  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:07.641647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.687694  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:07.687732  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:07.739346  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:07.739389  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:06.558978  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:09.058557  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:08.507261  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:10.508790  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:11.007666  185546 node_ready.go:49] node "no-preload-871884" has status "Ready":"True"
	I1028 12:17:11.007698  185546 node_ready.go:38] duration metric: took 7.003728813s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:11.007710  185546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:11.014677  185546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020020  185546 pod_ready.go:93] pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:11.020042  185546 pod_ready.go:82] duration metric: took 5.339994ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020053  185546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:08.765053  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.766104  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.262069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:10.277467  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:10.277566  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:10.320331  186170 cri.go:89] found id: ""
	I1028 12:17:10.320366  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.320378  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:10.320387  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:10.320455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:10.357204  186170 cri.go:89] found id: ""
	I1028 12:17:10.357235  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.357252  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:10.357261  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:10.357324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:10.392480  186170 cri.go:89] found id: ""
	I1028 12:17:10.392510  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.392519  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:10.392526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:10.392574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:10.430084  186170 cri.go:89] found id: ""
	I1028 12:17:10.430120  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.430132  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:10.430140  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:10.430207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:10.479689  186170 cri.go:89] found id: ""
	I1028 12:17:10.479717  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.479724  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:10.479730  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:10.479786  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:10.520871  186170 cri.go:89] found id: ""
	I1028 12:17:10.520902  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.520912  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:10.520920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:10.520978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:10.559121  186170 cri.go:89] found id: ""
	I1028 12:17:10.559154  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.559167  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:10.559176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:10.559254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:10.596552  186170 cri.go:89] found id: ""
	I1028 12:17:10.596583  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.596594  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:10.596603  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:10.596615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:10.673014  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:10.673037  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:10.673055  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:10.762942  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:10.762982  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:10.805866  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:10.805901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:10.858861  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:10.858895  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:11.556955  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.560411  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.027402  185546 pod_ready.go:103] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:14.026501  185546 pod_ready.go:93] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.026537  185546 pod_ready.go:82] duration metric: took 3.006475793s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.026552  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036355  185546 pod_ready.go:93] pod "kube-apiserver-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.036379  185546 pod_ready.go:82] duration metric: took 9.819102ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036391  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042711  185546 pod_ready.go:93] pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.042734  185546 pod_ready.go:82] duration metric: took 6.336523ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042745  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047387  185546 pod_ready.go:93] pod "kube-proxy-6rc4l" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.047409  185546 pod_ready.go:82] duration metric: took 4.657388ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047422  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208217  185546 pod_ready.go:93] pod "kube-scheduler-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.208243  185546 pod_ready.go:82] duration metric: took 160.813834ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208254  185546 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:16.214834  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.268493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:15.271377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.373936  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:13.387904  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:13.387969  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:13.435502  186170 cri.go:89] found id: ""
	I1028 12:17:13.435528  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.435536  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:13.435547  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:13.435593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:13.475592  186170 cri.go:89] found id: ""
	I1028 12:17:13.475621  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.475631  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:13.475639  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:13.475703  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:13.524964  186170 cri.go:89] found id: ""
	I1028 12:17:13.524993  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.525002  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:13.525010  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:13.525071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:13.570408  186170 cri.go:89] found id: ""
	I1028 12:17:13.570437  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.570446  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:13.570455  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:13.570515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:13.620981  186170 cri.go:89] found id: ""
	I1028 12:17:13.621008  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.621016  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:13.621022  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:13.621071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:13.657345  186170 cri.go:89] found id: ""
	I1028 12:17:13.657375  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.657385  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:13.657393  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:13.657455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:13.695975  186170 cri.go:89] found id: ""
	I1028 12:17:13.695998  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.696005  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:13.696012  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:13.696059  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:13.744055  186170 cri.go:89] found id: ""
	I1028 12:17:13.744093  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.744112  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:13.744128  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:13.744143  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:13.798898  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:13.798936  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:13.813630  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:13.813676  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:13.886699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:13.886733  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:13.886750  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:13.972377  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:13.972419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.518525  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:16.532512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:16.532594  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:16.573345  186170 cri.go:89] found id: ""
	I1028 12:17:16.573370  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.573377  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:16.573384  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:16.573449  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:16.611130  186170 cri.go:89] found id: ""
	I1028 12:17:16.611159  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.611170  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:16.611179  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:16.611242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:16.646155  186170 cri.go:89] found id: ""
	I1028 12:17:16.646180  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.646187  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:16.646194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:16.646253  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:16.680731  186170 cri.go:89] found id: ""
	I1028 12:17:16.680761  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.680770  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:16.680776  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:16.680836  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:16.725323  186170 cri.go:89] found id: ""
	I1028 12:17:16.725351  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.725361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:16.725370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:16.725429  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:16.761810  186170 cri.go:89] found id: ""
	I1028 12:17:16.761839  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.761850  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:16.761859  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:16.761919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:16.797737  186170 cri.go:89] found id: ""
	I1028 12:17:16.797771  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.797783  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:16.797791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:16.797854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:16.834045  186170 cri.go:89] found id: ""
	I1028 12:17:16.834077  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.834087  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:16.834098  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:16.834111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:16.885174  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:16.885211  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:16.900281  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:16.900312  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:16.973761  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:16.973784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:16.973799  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:17.058711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:17.058747  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.056296  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.557898  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.215767  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:20.219613  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:17.764493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.766909  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:21.769560  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.605867  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:19.620832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:19.620896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:19.660722  186170 cri.go:89] found id: ""
	I1028 12:17:19.660747  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.660757  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:19.660765  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:19.660825  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:19.698537  186170 cri.go:89] found id: ""
	I1028 12:17:19.698571  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.698581  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:19.698590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:19.698639  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:19.736911  186170 cri.go:89] found id: ""
	I1028 12:17:19.736945  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.736956  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:19.736972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:19.737041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:19.779343  186170 cri.go:89] found id: ""
	I1028 12:17:19.779371  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.779379  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:19.779384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:19.779432  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:19.824749  186170 cri.go:89] found id: ""
	I1028 12:17:19.824778  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.824788  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:19.824796  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:19.824861  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:19.862810  186170 cri.go:89] found id: ""
	I1028 12:17:19.862850  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.862862  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:19.862871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:19.862935  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:19.910552  186170 cri.go:89] found id: ""
	I1028 12:17:19.910583  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.910592  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:19.910601  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:19.910663  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:19.956806  186170 cri.go:89] found id: ""
	I1028 12:17:19.956838  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.956850  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:19.956862  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:19.956879  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:20.018142  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:20.018187  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:20.035656  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:20.035696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:20.112484  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:20.112515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:20.112535  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:20.203034  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:20.203079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:22.749198  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:22.762993  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:22.763073  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:22.808879  186170 cri.go:89] found id: ""
	I1028 12:17:22.808923  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.808934  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:22.808943  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:22.809013  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:22.845367  186170 cri.go:89] found id: ""
	I1028 12:17:22.845393  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.845401  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:22.845407  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:22.845457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:22.884841  186170 cri.go:89] found id: ""
	I1028 12:17:22.884870  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.884877  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:22.884884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:22.884936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:22.921830  186170 cri.go:89] found id: ""
	I1028 12:17:22.921857  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.921865  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:22.921871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:22.921917  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:22.958981  186170 cri.go:89] found id: ""
	I1028 12:17:22.959016  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.959028  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:22.959038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:22.959138  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:22.993987  186170 cri.go:89] found id: ""
	I1028 12:17:22.994022  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.994033  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:22.994041  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:22.994112  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:23.036235  186170 cri.go:89] found id: ""
	I1028 12:17:23.036262  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.036270  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:23.036276  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:23.036326  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:23.084209  186170 cri.go:89] found id: ""
	I1028 12:17:23.084237  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.084248  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:23.084260  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:23.084274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:23.168684  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:23.168725  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:23.211205  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:23.211246  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:23.269140  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:23.269174  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:23.283588  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:23.283620  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:17:21.057114  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:23.058470  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:25.556210  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:22.714692  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.717301  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.269572  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:26.765467  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:17:23.363349  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:25.864503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:25.881420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:25.881505  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:25.920194  186170 cri.go:89] found id: ""
	I1028 12:17:25.920230  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.920242  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:25.920250  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:25.920319  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:25.982898  186170 cri.go:89] found id: ""
	I1028 12:17:25.982940  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.982952  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:25.982960  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:25.983026  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:26.042807  186170 cri.go:89] found id: ""
	I1028 12:17:26.042848  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.042856  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:26.042863  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:26.042914  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:26.081683  186170 cri.go:89] found id: ""
	I1028 12:17:26.081717  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.081729  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:26.081738  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:26.081811  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:26.118390  186170 cri.go:89] found id: ""
	I1028 12:17:26.118419  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.118426  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:26.118433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:26.118482  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:26.154065  186170 cri.go:89] found id: ""
	I1028 12:17:26.154100  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.154108  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:26.154114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:26.154168  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:26.195602  186170 cri.go:89] found id: ""
	I1028 12:17:26.195634  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.195645  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:26.195656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:26.195711  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:26.237315  186170 cri.go:89] found id: ""
	I1028 12:17:26.237350  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.237361  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:26.237371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:26.237383  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:26.319079  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:26.319121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:26.360967  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:26.360996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:26.414689  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:26.414728  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:26.429733  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:26.429763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:26.503297  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:28.056563  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:30.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:27.215356  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.216505  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.267239  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.765267  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.003479  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:29.017833  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:29.017908  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:29.067759  186170 cri.go:89] found id: ""
	I1028 12:17:29.067785  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.067793  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:29.067799  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:29.067856  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:29.114369  186170 cri.go:89] found id: ""
	I1028 12:17:29.114401  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.114411  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:29.114419  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:29.114511  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:29.154640  186170 cri.go:89] found id: ""
	I1028 12:17:29.154672  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.154683  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:29.154692  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:29.154749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:29.194296  186170 cri.go:89] found id: ""
	I1028 12:17:29.194331  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.194341  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:29.194349  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:29.194413  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:29.239107  186170 cri.go:89] found id: ""
	I1028 12:17:29.239133  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.239146  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:29.239152  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:29.239199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:29.274900  186170 cri.go:89] found id: ""
	I1028 12:17:29.274928  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.274937  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:29.274946  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:29.275010  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:29.310307  186170 cri.go:89] found id: ""
	I1028 12:17:29.310336  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.310346  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:29.310354  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:29.310421  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:29.345285  186170 cri.go:89] found id: ""
	I1028 12:17:29.345313  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.345351  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:29.345363  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:29.345379  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:29.402044  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:29.402094  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:29.417578  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:29.417615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:29.497733  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:29.497757  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:29.497773  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:29.587148  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:29.587202  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:32.132697  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:32.146675  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:32.146746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:32.188640  186170 cri.go:89] found id: ""
	I1028 12:17:32.188669  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.188681  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:32.188690  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:32.188749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:32.228690  186170 cri.go:89] found id: ""
	I1028 12:17:32.228726  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.228738  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:32.228745  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:32.228812  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:32.269133  186170 cri.go:89] found id: ""
	I1028 12:17:32.269180  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.269191  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:32.269200  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:32.269279  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:32.319757  186170 cri.go:89] found id: ""
	I1028 12:17:32.319796  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.319809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:32.319817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:32.319888  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:32.360072  186170 cri.go:89] found id: ""
	I1028 12:17:32.360104  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.360116  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:32.360125  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:32.360192  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:32.413256  186170 cri.go:89] found id: ""
	I1028 12:17:32.413286  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.413297  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:32.413319  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:32.413371  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:32.454505  186170 cri.go:89] found id: ""
	I1028 12:17:32.454536  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.454547  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:32.454555  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:32.454621  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:32.495091  186170 cri.go:89] found id: ""
	I1028 12:17:32.495129  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.495138  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:32.495148  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:32.495163  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:32.548669  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:32.548712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:32.566003  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:32.566044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:32.642079  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:32.642104  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:32.642117  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:32.727317  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:32.727361  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:33.055776  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.056525  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.714959  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:33.715292  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.715824  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:34.267155  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:36.765199  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.278752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:35.292256  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:35.292344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:35.328420  186170 cri.go:89] found id: ""
	I1028 12:17:35.328447  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.328457  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:35.328465  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:35.328528  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:35.365120  186170 cri.go:89] found id: ""
	I1028 12:17:35.365153  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.365162  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:35.365170  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:35.365236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:35.402057  186170 cri.go:89] found id: ""
	I1028 12:17:35.402093  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.402105  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:35.402114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:35.402179  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:35.436496  186170 cri.go:89] found id: ""
	I1028 12:17:35.436523  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.436531  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:35.436536  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:35.436593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:35.473369  186170 cri.go:89] found id: ""
	I1028 12:17:35.473399  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.473409  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:35.473416  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:35.473480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:35.511258  186170 cri.go:89] found id: ""
	I1028 12:17:35.511293  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.511305  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:35.511337  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:35.511403  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:35.548430  186170 cri.go:89] found id: ""
	I1028 12:17:35.548461  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.548472  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:35.548479  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:35.548526  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:35.584324  186170 cri.go:89] found id: ""
	I1028 12:17:35.584357  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.584369  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:35.584379  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:35.584394  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:35.598813  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:35.598855  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:35.676911  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:35.676935  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:35.676948  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:35.757166  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:35.757205  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:35.801381  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:35.801411  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:37.557428  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.057039  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:37.715996  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.213916  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.765841  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:41.267477  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.356346  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:38.370346  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:38.370436  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:38.413623  186170 cri.go:89] found id: ""
	I1028 12:17:38.413653  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.413664  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:38.413671  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:38.413741  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:38.450656  186170 cri.go:89] found id: ""
	I1028 12:17:38.450682  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.450691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:38.450697  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:38.450754  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:38.491050  186170 cri.go:89] found id: ""
	I1028 12:17:38.491083  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.491090  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:38.491096  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:38.491146  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:38.529708  186170 cri.go:89] found id: ""
	I1028 12:17:38.529735  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.529743  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:38.529749  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:38.529808  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:38.566632  186170 cri.go:89] found id: ""
	I1028 12:17:38.566659  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.566673  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:38.566681  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:38.566746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:38.602323  186170 cri.go:89] found id: ""
	I1028 12:17:38.602362  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.602374  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:38.602382  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:38.602444  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:38.646462  186170 cri.go:89] found id: ""
	I1028 12:17:38.646487  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.646494  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:38.646499  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:38.646560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:38.681803  186170 cri.go:89] found id: ""
	I1028 12:17:38.681830  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.681837  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:38.681847  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:38.681858  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:38.697360  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:38.697387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:38.769502  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:38.769549  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:38.769566  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:38.852029  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:38.852068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:38.895585  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:38.895621  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.450844  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:41.464665  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:41.464731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:41.507199  186170 cri.go:89] found id: ""
	I1028 12:17:41.507265  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.507274  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:41.507280  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:41.507351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:41.550126  186170 cri.go:89] found id: ""
	I1028 12:17:41.550158  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.550168  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:41.550176  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:41.550237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:41.588914  186170 cri.go:89] found id: ""
	I1028 12:17:41.588942  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.588953  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:41.588961  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:41.589027  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:41.625255  186170 cri.go:89] found id: ""
	I1028 12:17:41.625285  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.625297  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:41.625315  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:41.625386  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:41.663786  186170 cri.go:89] found id: ""
	I1028 12:17:41.663816  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.663833  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:41.663844  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:41.663911  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:41.698330  186170 cri.go:89] found id: ""
	I1028 12:17:41.698357  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.698364  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:41.698371  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:41.698424  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:41.734658  186170 cri.go:89] found id: ""
	I1028 12:17:41.734688  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.734699  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:41.734707  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:41.734776  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:41.773227  186170 cri.go:89] found id: ""
	I1028 12:17:41.773262  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.773273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:41.773286  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:41.773301  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:41.815830  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:41.815866  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.866789  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:41.866832  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:41.882088  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:41.882121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:41.953895  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:41.953917  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:41.953933  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:42.556504  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.557351  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:42.216159  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.216286  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:43.764776  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.265654  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.538655  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:44.551644  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:44.551724  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:44.589370  186170 cri.go:89] found id: ""
	I1028 12:17:44.589400  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.589407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:44.589413  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:44.589473  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:44.625143  186170 cri.go:89] found id: ""
	I1028 12:17:44.625175  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.625185  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:44.625198  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:44.625283  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:44.664579  186170 cri.go:89] found id: ""
	I1028 12:17:44.664609  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.664620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:44.664628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:44.664692  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:44.700009  186170 cri.go:89] found id: ""
	I1028 12:17:44.700038  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.700046  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:44.700053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:44.700119  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:44.736283  186170 cri.go:89] found id: ""
	I1028 12:17:44.736316  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.736323  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:44.736331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:44.736393  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:44.772214  186170 cri.go:89] found id: ""
	I1028 12:17:44.772249  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.772261  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:44.772270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:44.772324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:44.808152  186170 cri.go:89] found id: ""
	I1028 12:17:44.808187  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.808198  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:44.808206  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:44.808276  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:44.844208  186170 cri.go:89] found id: ""
	I1028 12:17:44.844238  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.844251  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:44.844264  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:44.844286  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:44.925988  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:44.926029  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:44.964936  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:44.964969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:45.015630  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:45.015675  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:45.030537  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:45.030571  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:45.103861  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:47.604548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:47.618858  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:47.618941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:47.663237  186170 cri.go:89] found id: ""
	I1028 12:17:47.663267  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.663278  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:47.663285  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:47.663350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:47.703207  186170 cri.go:89] found id: ""
	I1028 12:17:47.703236  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.703244  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:47.703250  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:47.703322  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:47.743050  186170 cri.go:89] found id: ""
	I1028 12:17:47.743081  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.743091  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:47.743099  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:47.743161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:47.789956  186170 cri.go:89] found id: ""
	I1028 12:17:47.789982  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.789989  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:47.789996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:47.790055  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:47.833134  186170 cri.go:89] found id: ""
	I1028 12:17:47.833165  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.833177  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:47.833184  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:47.833241  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:47.870881  186170 cri.go:89] found id: ""
	I1028 12:17:47.870905  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.870916  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:47.870925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:47.870992  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:47.908121  186170 cri.go:89] found id: ""
	I1028 12:17:47.908155  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.908165  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:47.908173  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:47.908236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:47.946835  186170 cri.go:89] found id: ""
	I1028 12:17:47.946871  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.946884  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:47.946896  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:47.946914  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:47.999276  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:47.999316  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:48.016268  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:48.016306  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:48.099928  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:48.099959  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:48.099976  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:48.180885  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:48.180937  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:46.565643  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.057078  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.716667  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.216308  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:48.267160  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.764737  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.727685  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:50.741737  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:50.741820  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:50.782030  186170 cri.go:89] found id: ""
	I1028 12:17:50.782060  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.782081  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:50.782090  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:50.782157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:50.817423  186170 cri.go:89] found id: ""
	I1028 12:17:50.817453  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.817464  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:50.817471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:50.817523  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:50.857203  186170 cri.go:89] found id: ""
	I1028 12:17:50.857232  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.857242  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:50.857249  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:50.857324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:50.894196  186170 cri.go:89] found id: ""
	I1028 12:17:50.894236  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.894248  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:50.894259  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:50.894325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:50.930014  186170 cri.go:89] found id: ""
	I1028 12:17:50.930046  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.930056  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:50.930064  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:50.930128  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:50.967742  186170 cri.go:89] found id: ""
	I1028 12:17:50.967774  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.967785  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:50.967799  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:50.967857  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:51.013232  186170 cri.go:89] found id: ""
	I1028 12:17:51.013258  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.013269  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:51.013281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:51.013341  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:51.052871  186170 cri.go:89] found id: ""
	I1028 12:17:51.052900  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.052912  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:51.052923  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:51.052943  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:51.106536  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:51.106579  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:51.121628  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:51.121670  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:51.200215  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:51.200249  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:51.200266  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:51.291948  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:51.291996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:51.058399  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.556450  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:55.557043  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:51.715736  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.215689  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:52.764839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.766020  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:57.269346  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.837066  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:53.851660  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:53.851747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:53.888799  186170 cri.go:89] found id: ""
	I1028 12:17:53.888835  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.888846  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:53.888855  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:53.888919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:53.923838  186170 cri.go:89] found id: ""
	I1028 12:17:53.923867  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.923875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:53.923880  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:53.923940  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:53.960264  186170 cri.go:89] found id: ""
	I1028 12:17:53.960293  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.960302  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:53.960307  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:53.960356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:53.995913  186170 cri.go:89] found id: ""
	I1028 12:17:53.995943  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.995952  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:53.995958  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:53.996009  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:54.032127  186170 cri.go:89] found id: ""
	I1028 12:17:54.032155  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.032163  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:54.032169  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:54.032219  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:54.070230  186170 cri.go:89] found id: ""
	I1028 12:17:54.070267  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.070279  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:54.070288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:54.070346  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:54.104992  186170 cri.go:89] found id: ""
	I1028 12:17:54.105024  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.105032  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:54.105038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:54.105099  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:54.140071  186170 cri.go:89] found id: ""
	I1028 12:17:54.140102  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.140113  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:54.140124  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:54.140137  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:54.195304  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:54.195353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:54.210315  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:54.210355  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:54.301247  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:54.301279  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:54.301300  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:54.382818  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:54.382876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:56.928740  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:56.942264  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:56.942334  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:56.979445  186170 cri.go:89] found id: ""
	I1028 12:17:56.979494  186170 logs.go:282] 0 containers: []
	W1028 12:17:56.979503  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:56.979510  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:56.979580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:57.017777  186170 cri.go:89] found id: ""
	I1028 12:17:57.017817  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.017831  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:57.017840  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:57.017954  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:57.058842  186170 cri.go:89] found id: ""
	I1028 12:17:57.058873  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.058881  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:57.058887  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:57.058941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:57.096365  186170 cri.go:89] found id: ""
	I1028 12:17:57.096393  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.096401  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:57.096408  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:57.096456  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:57.135395  186170 cri.go:89] found id: ""
	I1028 12:17:57.135425  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.135433  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:57.135440  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:57.135502  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:57.173426  186170 cri.go:89] found id: ""
	I1028 12:17:57.173455  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.173466  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:57.173473  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:57.173536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:57.209969  186170 cri.go:89] found id: ""
	I1028 12:17:57.210004  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.210015  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:57.210026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:57.210118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:57.252141  186170 cri.go:89] found id: ""
	I1028 12:17:57.252172  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.252182  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:57.252192  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:57.252206  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:57.304533  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:57.304576  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:57.319775  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:57.319807  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:57.385156  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:57.385186  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:57.385198  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:57.464777  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:57.464818  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:57.557519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.057963  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:56.715168  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:58.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.215445  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:59.271418  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.766158  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.005073  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:00.033478  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:00.033580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:00.071437  186170 cri.go:89] found id: ""
	I1028 12:18:00.071462  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.071470  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:00.071475  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:00.071524  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:00.108147  186170 cri.go:89] found id: ""
	I1028 12:18:00.108183  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.108195  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:00.108204  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:00.108262  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:00.146129  186170 cri.go:89] found id: ""
	I1028 12:18:00.146157  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.146168  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:00.146176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:00.146237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:00.184211  186170 cri.go:89] found id: ""
	I1028 12:18:00.184239  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.184254  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:00.184262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:00.184325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:00.221949  186170 cri.go:89] found id: ""
	I1028 12:18:00.221980  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.221988  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:00.221995  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:00.222049  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:00.264173  186170 cri.go:89] found id: ""
	I1028 12:18:00.264203  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.264213  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:00.264230  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:00.264287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:00.302024  186170 cri.go:89] found id: ""
	I1028 12:18:00.302048  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.302057  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:00.302065  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:00.302134  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:00.340500  186170 cri.go:89] found id: ""
	I1028 12:18:00.340529  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.340542  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:00.340553  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:00.340574  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:00.392375  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:00.392419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:00.409823  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:00.409854  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:00.489965  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:00.489988  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:00.490000  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:00.574510  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:00.574553  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.116821  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:03.131120  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:03.131188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:03.168283  186170 cri.go:89] found id: ""
	I1028 12:18:03.168320  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.168331  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:03.168340  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:03.168404  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:03.210877  186170 cri.go:89] found id: ""
	I1028 12:18:03.210902  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.210910  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:03.210922  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:03.210981  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:03.248316  186170 cri.go:89] found id: ""
	I1028 12:18:03.248351  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.248362  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:03.248370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:03.248437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:03.287624  186170 cri.go:89] found id: ""
	I1028 12:18:03.287653  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.287663  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:03.287674  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:03.287738  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:02.556743  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.055348  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.217504  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.715462  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.768899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:06.266111  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.323235  186170 cri.go:89] found id: ""
	I1028 12:18:03.323268  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.323281  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:03.323289  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:03.323350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:03.359449  186170 cri.go:89] found id: ""
	I1028 12:18:03.359481  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.359489  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:03.359496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:03.359544  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:03.397656  186170 cri.go:89] found id: ""
	I1028 12:18:03.397682  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.397690  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:03.397696  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:03.397756  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:03.436269  186170 cri.go:89] found id: ""
	I1028 12:18:03.436312  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.436325  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:03.436337  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:03.436353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.484677  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:03.484721  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:03.538826  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:03.538867  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:03.554032  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:03.554067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:03.630222  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:03.630256  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:03.630274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.208709  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:06.223650  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:06.223731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:06.264302  186170 cri.go:89] found id: ""
	I1028 12:18:06.264339  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.264348  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:06.264356  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:06.264415  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:06.306168  186170 cri.go:89] found id: ""
	I1028 12:18:06.306204  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.306212  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:06.306218  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:06.306306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:06.344883  186170 cri.go:89] found id: ""
	I1028 12:18:06.344909  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.344920  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:06.344927  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:06.344978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:06.382601  186170 cri.go:89] found id: ""
	I1028 12:18:06.382630  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.382640  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:06.382648  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:06.382720  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:06.428844  186170 cri.go:89] found id: ""
	I1028 12:18:06.428871  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.428878  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:06.428884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:06.428936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:06.480468  186170 cri.go:89] found id: ""
	I1028 12:18:06.480497  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.480508  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:06.480516  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:06.480581  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:06.525838  186170 cri.go:89] found id: ""
	I1028 12:18:06.525869  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.525882  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:06.525890  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:06.525950  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:06.572122  186170 cri.go:89] found id: ""
	I1028 12:18:06.572147  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.572154  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:06.572164  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:06.572176  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:06.642898  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:06.642925  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:06.642941  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.727353  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:06.727399  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:06.770170  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:06.770208  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:06.825593  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:06.825635  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:07.055842  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.057870  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:07.716593  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.215089  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:08.266990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.765441  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.340955  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:09.355706  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:09.355783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:09.390008  186170 cri.go:89] found id: ""
	I1028 12:18:09.390039  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.390050  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:09.390057  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:09.390123  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:09.428209  186170 cri.go:89] found id: ""
	I1028 12:18:09.428247  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.428259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:09.428267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:09.428327  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:09.466499  186170 cri.go:89] found id: ""
	I1028 12:18:09.466524  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.466531  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:09.466538  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:09.466596  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:09.505384  186170 cri.go:89] found id: ""
	I1028 12:18:09.505418  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.505426  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:09.505433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:09.505492  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:09.543113  186170 cri.go:89] found id: ""
	I1028 12:18:09.543145  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.543154  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:09.543160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:09.543225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:09.581402  186170 cri.go:89] found id: ""
	I1028 12:18:09.581436  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.581446  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:09.581459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:09.581542  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:09.620586  186170 cri.go:89] found id: ""
	I1028 12:18:09.620616  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.620623  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:09.620629  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:09.620682  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:09.657220  186170 cri.go:89] found id: ""
	I1028 12:18:09.657246  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.657253  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:09.657261  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:09.657272  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:09.709636  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:09.709671  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:09.724476  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:09.724510  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:09.800194  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:09.800226  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:09.800242  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:09.882217  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:09.882254  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:12.425609  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:12.443417  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:12.443480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:12.509173  186170 cri.go:89] found id: ""
	I1028 12:18:12.509202  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.509211  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:12.509217  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:12.509287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:12.546564  186170 cri.go:89] found id: ""
	I1028 12:18:12.546595  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.546605  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:12.546612  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:12.546676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:12.584949  186170 cri.go:89] found id: ""
	I1028 12:18:12.584982  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.584990  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:12.584996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:12.585045  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:12.624513  186170 cri.go:89] found id: ""
	I1028 12:18:12.624543  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.624554  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:12.624562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:12.624624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:12.661811  186170 cri.go:89] found id: ""
	I1028 12:18:12.661854  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.661867  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:12.661876  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:12.661936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:12.700037  186170 cri.go:89] found id: ""
	I1028 12:18:12.700072  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.700080  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:12.700086  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:12.700149  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:12.740604  186170 cri.go:89] found id: ""
	I1028 12:18:12.740629  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.740637  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:12.740643  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:12.740696  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:12.779296  186170 cri.go:89] found id: ""
	I1028 12:18:12.779323  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.779333  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:12.779344  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:12.779358  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:12.830286  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:12.830330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:12.845423  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:12.845449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:12.923961  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:12.924003  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:12.924018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:13.003949  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:13.003990  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:11.556422  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.056678  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.216340  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.715086  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.766493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.766870  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.264729  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:15.552001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:15.565834  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:15.565899  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:15.598794  186170 cri.go:89] found id: ""
	I1028 12:18:15.598819  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.598828  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:15.598836  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:15.598904  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:15.637029  186170 cri.go:89] found id: ""
	I1028 12:18:15.637062  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.637073  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:15.637082  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:15.637148  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:15.675461  186170 cri.go:89] found id: ""
	I1028 12:18:15.675495  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.675503  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:15.675510  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:15.675577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:15.709169  186170 cri.go:89] found id: ""
	I1028 12:18:15.709198  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.709210  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:15.709217  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:15.709288  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:15.747687  186170 cri.go:89] found id: ""
	I1028 12:18:15.747715  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.747725  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:15.747740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:15.747802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:15.785554  186170 cri.go:89] found id: ""
	I1028 12:18:15.785587  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.785598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:15.785607  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:15.785674  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:15.828713  186170 cri.go:89] found id: ""
	I1028 12:18:15.828749  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.828762  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:15.828771  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:15.828834  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:15.864708  186170 cri.go:89] found id: ""
	I1028 12:18:15.864745  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.864757  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:15.864767  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:15.864788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:15.941064  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:15.941090  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:15.941102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:16.031546  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:16.031586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:16.074297  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:16.074343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:16.132758  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:16.132803  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:16.057216  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.555816  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:20.556292  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.215803  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.215927  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.265178  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.268144  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.649877  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:18.663420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:18.663480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:18.698967  186170 cri.go:89] found id: ""
	I1028 12:18:18.698999  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.699011  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:18.699020  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:18.699088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:18.738095  186170 cri.go:89] found id: ""
	I1028 12:18:18.738128  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.738140  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:18.738149  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:18.738231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:18.780039  186170 cri.go:89] found id: ""
	I1028 12:18:18.780066  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.780074  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:18.780080  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:18.780131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:18.820458  186170 cri.go:89] found id: ""
	I1028 12:18:18.820492  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.820501  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:18.820512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:18.820569  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:18.860856  186170 cri.go:89] found id: ""
	I1028 12:18:18.860887  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.860896  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:18.860903  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:18.860965  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:18.900435  186170 cri.go:89] found id: ""
	I1028 12:18:18.900467  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.900478  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:18.900486  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:18.900547  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:18.938468  186170 cri.go:89] found id: ""
	I1028 12:18:18.938499  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.938508  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:18.938515  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:18.938570  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:18.975389  186170 cri.go:89] found id: ""
	I1028 12:18:18.975429  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.975440  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:18.975451  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:18.975466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:19.028306  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:19.028354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:19.043348  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:19.043382  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:19.117653  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:19.117721  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:19.117737  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:19.204218  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:19.204256  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:21.749564  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:21.768060  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:21.768131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:21.805414  186170 cri.go:89] found id: ""
	I1028 12:18:21.805443  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.805454  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:21.805462  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:21.805541  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:21.842649  186170 cri.go:89] found id: ""
	I1028 12:18:21.842681  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.842691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:21.842699  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:21.842767  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:21.883241  186170 cri.go:89] found id: ""
	I1028 12:18:21.883269  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.883279  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:21.883288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:21.883351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:21.926358  186170 cri.go:89] found id: ""
	I1028 12:18:21.926386  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.926394  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:21.926401  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:21.926453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:21.964671  186170 cri.go:89] found id: ""
	I1028 12:18:21.964705  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.964717  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:21.964726  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:21.964794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:22.019111  186170 cri.go:89] found id: ""
	I1028 12:18:22.019144  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.019154  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:22.019163  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:22.019223  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:22.057484  186170 cri.go:89] found id: ""
	I1028 12:18:22.057511  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.057518  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:22.057547  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:22.057606  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:22.096908  186170 cri.go:89] found id: ""
	I1028 12:18:22.096931  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.096938  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:22.096947  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:22.096962  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:22.180348  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:22.180386  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:22.224772  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:22.224808  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:22.277686  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:22.277726  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:22.293300  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:22.293330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:22.369990  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:22.556987  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.057115  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.715576  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.715814  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.716043  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.767435  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:26.269805  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:24.870290  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:24.887030  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:24.887090  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:24.927592  186170 cri.go:89] found id: ""
	I1028 12:18:24.927620  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.927628  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:24.927635  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:24.927700  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:24.969025  186170 cri.go:89] found id: ""
	I1028 12:18:24.969059  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.969070  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:24.969077  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:24.969142  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:25.005439  186170 cri.go:89] found id: ""
	I1028 12:18:25.005476  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.005488  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:25.005496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:25.005573  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:25.046612  186170 cri.go:89] found id: ""
	I1028 12:18:25.046650  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.046659  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:25.046669  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:25.046733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:25.083162  186170 cri.go:89] found id: ""
	I1028 12:18:25.083186  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.083200  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:25.083209  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:25.083270  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:25.119277  186170 cri.go:89] found id: ""
	I1028 12:18:25.119322  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.119333  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:25.119341  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:25.119409  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:25.160875  186170 cri.go:89] found id: ""
	I1028 12:18:25.160906  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.160917  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:25.160925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:25.160987  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:25.194958  186170 cri.go:89] found id: ""
	I1028 12:18:25.194993  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.195003  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:25.195016  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:25.195032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:25.248571  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:25.248612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:25.264844  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:25.264876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:25.341487  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:25.341517  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:25.341552  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:25.419543  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:25.419586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:27.963358  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:27.977449  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:27.977509  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:28.013922  186170 cri.go:89] found id: ""
	I1028 12:18:28.013955  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.013963  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:28.013969  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:28.014050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:28.054628  186170 cri.go:89] found id: ""
	I1028 12:18:28.054658  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.054666  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:28.054671  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:28.054719  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:28.094289  186170 cri.go:89] found id: ""
	I1028 12:18:28.094315  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.094323  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:28.094330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:28.094390  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:28.131949  186170 cri.go:89] found id: ""
	I1028 12:18:28.131998  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.132011  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:28.132019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:28.132082  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:28.170428  186170 cri.go:89] found id: ""
	I1028 12:18:28.170461  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.170474  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:28.170483  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:28.170550  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:28.204953  186170 cri.go:89] found id: ""
	I1028 12:18:28.204980  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.204987  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:28.204994  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:28.205041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:28.247002  186170 cri.go:89] found id: ""
	I1028 12:18:28.247035  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.247044  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:28.247052  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:28.247122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:28.286700  186170 cri.go:89] found id: ""
	I1028 12:18:28.286730  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.286739  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:28.286747  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:28.286762  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:27.556197  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.057036  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.216535  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.715902  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.765730  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:31.267947  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.339162  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:28.339201  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:28.353667  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:28.353696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:28.426762  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:28.426784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:28.426800  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:28.511192  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:28.511232  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:31.054503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:31.069105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:31.069195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:31.112198  186170 cri.go:89] found id: ""
	I1028 12:18:31.112228  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.112237  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:31.112243  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:31.112306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:31.151487  186170 cri.go:89] found id: ""
	I1028 12:18:31.151522  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.151535  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:31.151544  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:31.151605  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:31.189604  186170 cri.go:89] found id: ""
	I1028 12:18:31.189636  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.189645  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:31.189651  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:31.189712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:31.231683  186170 cri.go:89] found id: ""
	I1028 12:18:31.231716  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.231726  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:31.231735  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:31.231793  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:31.268785  186170 cri.go:89] found id: ""
	I1028 12:18:31.268813  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.268824  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:31.268832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:31.268901  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:31.307450  186170 cri.go:89] found id: ""
	I1028 12:18:31.307475  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.307483  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:31.307489  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:31.307539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:31.342965  186170 cri.go:89] found id: ""
	I1028 12:18:31.342999  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.343011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:31.343019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:31.343084  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:31.380275  186170 cri.go:89] found id: ""
	I1028 12:18:31.380307  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.380317  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:31.380329  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:31.380343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:31.430198  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:31.430249  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:31.446355  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:31.446387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:31.530708  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:31.530738  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:31.530754  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:31.614033  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:31.614079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:32.556500  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.557446  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.214627  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:35.214782  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.772856  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:36.265722  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.156345  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:34.169766  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:34.169829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:34.208855  186170 cri.go:89] found id: ""
	I1028 12:18:34.208888  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.208903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:34.208910  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:34.208967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:34.258485  186170 cri.go:89] found id: ""
	I1028 12:18:34.258515  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.258524  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:34.258531  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:34.258593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:34.294139  186170 cri.go:89] found id: ""
	I1028 12:18:34.294168  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.294176  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:34.294182  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:34.294242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:34.329848  186170 cri.go:89] found id: ""
	I1028 12:18:34.329881  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.329892  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:34.329900  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:34.329967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:34.368223  186170 cri.go:89] found id: ""
	I1028 12:18:34.368249  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.368256  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:34.368262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:34.368310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:34.405101  186170 cri.go:89] found id: ""
	I1028 12:18:34.405133  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.405142  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:34.405149  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:34.405207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:34.441998  186170 cri.go:89] found id: ""
	I1028 12:18:34.442034  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.442045  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:34.442053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:34.442118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:34.478842  186170 cri.go:89] found id: ""
	I1028 12:18:34.478877  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.478888  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:34.478901  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:34.478917  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:34.532950  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:34.532991  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:34.548614  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:34.548643  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:34.623699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:34.623726  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:34.623743  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:34.702104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:34.702142  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.259720  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:37.276526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:37.276592  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:37.325783  186170 cri.go:89] found id: ""
	I1028 12:18:37.325823  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.325838  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:37.325847  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:37.325916  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:37.362754  186170 cri.go:89] found id: ""
	I1028 12:18:37.362784  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.362805  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:37.362813  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:37.362891  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:37.400428  186170 cri.go:89] found id: ""
	I1028 12:18:37.400465  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.400477  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:37.400485  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:37.400548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:37.438792  186170 cri.go:89] found id: ""
	I1028 12:18:37.438834  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.438846  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:37.438855  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:37.438918  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:37.477032  186170 cri.go:89] found id: ""
	I1028 12:18:37.477115  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.477126  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:37.477132  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:37.477199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:37.514834  186170 cri.go:89] found id: ""
	I1028 12:18:37.514866  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.514878  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:37.514888  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:37.514975  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:37.560797  186170 cri.go:89] found id: ""
	I1028 12:18:37.560821  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.560828  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:37.560835  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:37.560889  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:37.611126  186170 cri.go:89] found id: ""
	I1028 12:18:37.611156  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.611165  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:37.611177  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:37.611200  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.654809  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:37.654849  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:37.713519  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:37.713572  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:37.728043  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:37.728081  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:37.806662  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:37.806684  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:37.806702  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:36.559507  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.056993  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:37.215498  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.715541  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:38.266461  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.266611  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:42.268638  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.388380  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:40.402330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:40.402405  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:40.444948  186170 cri.go:89] found id: ""
	I1028 12:18:40.444978  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.444990  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:40.445002  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:40.445062  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:40.482342  186170 cri.go:89] found id: ""
	I1028 12:18:40.482378  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.482387  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:40.482393  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:40.482457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:40.532277  186170 cri.go:89] found id: ""
	I1028 12:18:40.532307  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.532318  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:40.532326  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:40.532388  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:40.579092  186170 cri.go:89] found id: ""
	I1028 12:18:40.579122  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.579130  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:40.579136  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:40.579204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:40.617091  186170 cri.go:89] found id: ""
	I1028 12:18:40.617116  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.617124  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:40.617130  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:40.617188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:40.655830  186170 cri.go:89] found id: ""
	I1028 12:18:40.655861  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.655871  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:40.655879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:40.655949  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:40.693436  186170 cri.go:89] found id: ""
	I1028 12:18:40.693472  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.693480  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:40.693490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:40.693572  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:40.731576  186170 cri.go:89] found id: ""
	I1028 12:18:40.731604  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.731615  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:40.731626  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:40.731642  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:40.782395  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:40.782441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:40.797572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:40.797607  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:40.873037  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:40.873078  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:40.873095  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:40.950913  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:40.950954  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:41.555847  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.558407  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:41.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.716370  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:46.214690  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:44.765752  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:47.266258  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.493377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:43.508379  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:43.508453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:43.546621  186170 cri.go:89] found id: ""
	I1028 12:18:43.546652  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.546660  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:43.546667  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:43.546714  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:43.587430  186170 cri.go:89] found id: ""
	I1028 12:18:43.587455  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.587462  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:43.587468  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:43.587520  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:43.623597  186170 cri.go:89] found id: ""
	I1028 12:18:43.623625  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.623633  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:43.623640  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:43.623702  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:43.661235  186170 cri.go:89] found id: ""
	I1028 12:18:43.661266  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.661274  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:43.661281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:43.661344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:43.697400  186170 cri.go:89] found id: ""
	I1028 12:18:43.697437  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.697448  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:43.697457  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:43.697521  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:43.732995  186170 cri.go:89] found id: ""
	I1028 12:18:43.733028  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.733038  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:43.733047  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:43.733115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:43.772570  186170 cri.go:89] found id: ""
	I1028 12:18:43.772595  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.772602  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:43.772608  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:43.772669  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:43.814234  186170 cri.go:89] found id: ""
	I1028 12:18:43.814265  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.814273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:43.814283  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:43.814295  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:43.868582  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:43.868630  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:43.885098  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:43.885136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:43.967902  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:43.967937  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:43.967955  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:44.048973  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:44.049021  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.592668  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:46.608596  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:46.608664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:46.652750  186170 cri.go:89] found id: ""
	I1028 12:18:46.652777  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.652785  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:46.652790  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:46.652848  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:46.696309  186170 cri.go:89] found id: ""
	I1028 12:18:46.696333  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.696340  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:46.696346  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:46.696396  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:46.741580  186170 cri.go:89] found id: ""
	I1028 12:18:46.741609  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.741620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:46.741628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:46.741693  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:46.782589  186170 cri.go:89] found id: ""
	I1028 12:18:46.782620  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.782628  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:46.782635  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:46.782695  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:46.821602  186170 cri.go:89] found id: ""
	I1028 12:18:46.821632  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.821644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:46.821653  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:46.821713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:46.857025  186170 cri.go:89] found id: ""
	I1028 12:18:46.857050  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.857060  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:46.857067  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:46.857115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:46.893687  186170 cri.go:89] found id: ""
	I1028 12:18:46.893725  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.893737  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:46.893746  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:46.893818  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:46.930334  186170 cri.go:89] found id: ""
	I1028 12:18:46.930367  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.930377  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:46.930385  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:46.930398  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:46.980610  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:46.980650  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:46.995861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:46.995901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:47.069355  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:47.069383  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:47.069396  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:47.157228  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:47.157284  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.056747  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.058377  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.557006  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.715456  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.716120  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.267222  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:51.765814  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.722229  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:49.735404  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:49.735507  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:49.776722  186170 cri.go:89] found id: ""
	I1028 12:18:49.776757  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.776768  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:49.776776  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:49.776844  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:49.812856  186170 cri.go:89] found id: ""
	I1028 12:18:49.812888  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.812898  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:49.812905  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:49.812989  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:49.849483  186170 cri.go:89] found id: ""
	I1028 12:18:49.849516  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.849544  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:49.849603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:49.849672  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:49.886525  186170 cri.go:89] found id: ""
	I1028 12:18:49.886555  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.886566  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:49.886574  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:49.886637  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:49.928249  186170 cri.go:89] found id: ""
	I1028 12:18:49.928281  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.928292  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:49.928299  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:49.928354  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:49.964587  186170 cri.go:89] found id: ""
	I1028 12:18:49.964619  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.964630  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:49.964641  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:49.964704  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:50.002275  186170 cri.go:89] found id: ""
	I1028 12:18:50.002305  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.002314  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:50.002321  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:50.002376  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:50.040949  186170 cri.go:89] found id: ""
	I1028 12:18:50.040979  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.040990  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:50.041003  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:50.041018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:50.086062  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:50.086098  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:50.138786  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:50.138837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:50.152992  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:50.153023  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:50.230432  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:50.230465  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:50.230481  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:52.813001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:52.825800  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:52.825879  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:52.863852  186170 cri.go:89] found id: ""
	I1028 12:18:52.863882  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.863893  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:52.863901  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:52.863967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:52.902963  186170 cri.go:89] found id: ""
	I1028 12:18:52.903003  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.903016  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:52.903024  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:52.903098  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:52.950862  186170 cri.go:89] found id: ""
	I1028 12:18:52.950893  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.950903  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:52.950912  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:52.950980  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:52.995840  186170 cri.go:89] found id: ""
	I1028 12:18:52.995872  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.995883  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:52.995891  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:52.995960  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:53.040153  186170 cri.go:89] found id: ""
	I1028 12:18:53.040179  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.040187  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:53.040194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:53.040256  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:53.077492  186170 cri.go:89] found id: ""
	I1028 12:18:53.077548  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.077561  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:53.077568  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:53.077618  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:53.114930  186170 cri.go:89] found id: ""
	I1028 12:18:53.114962  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.114973  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:53.114981  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:53.115064  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:53.152707  186170 cri.go:89] found id: ""
	I1028 12:18:53.152737  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.152747  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:53.152760  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:53.152777  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:53.195033  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:53.195068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:53.246464  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:53.246500  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:53.261430  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:53.261456  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:18:52.557045  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.057031  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:53.215817  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.714784  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:54.268377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:56.764471  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:18:53.343518  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:53.343541  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:53.343556  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:55.924584  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:55.938627  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:55.938712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:55.976319  186170 cri.go:89] found id: ""
	I1028 12:18:55.976354  186170 logs.go:282] 0 containers: []
	W1028 12:18:55.976364  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:55.976372  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:55.976440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:56.013947  186170 cri.go:89] found id: ""
	I1028 12:18:56.013979  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.014002  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:56.014010  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:56.014065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:56.055934  186170 cri.go:89] found id: ""
	I1028 12:18:56.055963  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.055970  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:56.055976  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:56.056030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:56.092766  186170 cri.go:89] found id: ""
	I1028 12:18:56.092798  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.092809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:56.092817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:56.092883  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:56.129708  186170 cri.go:89] found id: ""
	I1028 12:18:56.129741  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.129748  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:56.129755  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:56.129817  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:56.169640  186170 cri.go:89] found id: ""
	I1028 12:18:56.169684  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.169693  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:56.169700  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:56.169761  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:56.210585  186170 cri.go:89] found id: ""
	I1028 12:18:56.210617  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.210626  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:56.210633  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:56.210683  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:56.248144  186170 cri.go:89] found id: ""
	I1028 12:18:56.248177  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.248189  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:56.248201  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:56.248216  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:56.298962  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:56.299004  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:56.313314  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:56.313351  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:56.389450  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:56.389473  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:56.389508  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:56.470888  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:56.470927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:57.556098  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.057165  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:57.716269  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.214149  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:58.765585  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:01.265119  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:59.012377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:59.025740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:59.025853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:59.063706  186170 cri.go:89] found id: ""
	I1028 12:18:59.063770  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.063782  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:59.063794  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:59.063855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:59.100543  186170 cri.go:89] found id: ""
	I1028 12:18:59.100573  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.100582  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:59.100590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:59.100651  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:59.140044  186170 cri.go:89] found id: ""
	I1028 12:18:59.140073  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.140080  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:59.140087  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:59.140133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:59.174872  186170 cri.go:89] found id: ""
	I1028 12:18:59.174905  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.174914  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:59.174920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:59.174971  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:59.210456  186170 cri.go:89] found id: ""
	I1028 12:18:59.210484  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.210492  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:59.210498  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:59.210560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:59.248441  186170 cri.go:89] found id: ""
	I1028 12:18:59.248474  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.248485  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:59.248494  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:59.248558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:59.286897  186170 cri.go:89] found id: ""
	I1028 12:18:59.286928  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.286937  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:59.286944  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:59.286996  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:59.323187  186170 cri.go:89] found id: ""
	I1028 12:18:59.323221  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.323232  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:59.323244  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:59.323260  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:59.401126  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:59.401156  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:59.401171  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:59.486673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:59.486712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:59.532117  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:59.532153  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:59.588697  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:59.588738  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.104377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:02.118007  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:02.118092  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:02.157674  186170 cri.go:89] found id: ""
	I1028 12:19:02.157705  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.157715  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:02.157724  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:02.157783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:02.194407  186170 cri.go:89] found id: ""
	I1028 12:19:02.194437  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.194448  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:02.194456  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:02.194546  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:02.232940  186170 cri.go:89] found id: ""
	I1028 12:19:02.232975  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.232988  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:02.232996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:02.233070  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:02.271554  186170 cri.go:89] found id: ""
	I1028 12:19:02.271595  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.271606  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:02.271613  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:02.271681  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:02.309932  186170 cri.go:89] found id: ""
	I1028 12:19:02.309965  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.309975  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:02.309984  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:02.310044  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:02.345704  186170 cri.go:89] found id: ""
	I1028 12:19:02.345732  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.345740  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:02.345747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:02.345794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:02.381727  186170 cri.go:89] found id: ""
	I1028 12:19:02.381760  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.381770  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:02.381778  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:02.381841  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:02.417888  186170 cri.go:89] found id: ""
	I1028 12:19:02.417922  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.417933  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:02.417943  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:02.417961  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:02.497427  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:02.497458  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:02.497471  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:02.580562  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:02.580600  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:02.619048  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:02.619087  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:02.677089  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:02.677136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.556763  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.557107  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:02.216779  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.714940  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:03.267189  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.268332  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.192892  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:05.207240  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:05.207325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:05.244005  186170 cri.go:89] found id: ""
	I1028 12:19:05.244041  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.244070  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:05.244078  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:05.244130  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:05.285828  186170 cri.go:89] found id: ""
	I1028 12:19:05.285859  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.285869  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:05.285877  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:05.285936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:05.324666  186170 cri.go:89] found id: ""
	I1028 12:19:05.324694  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.324706  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:05.324713  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:05.324782  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:05.361365  186170 cri.go:89] found id: ""
	I1028 12:19:05.361401  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.361414  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:05.361423  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:05.361485  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:05.399962  186170 cri.go:89] found id: ""
	I1028 12:19:05.399996  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.400007  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:05.400017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:05.400116  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:05.438510  186170 cri.go:89] found id: ""
	I1028 12:19:05.438541  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.438553  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:05.438562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:05.438624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:05.477168  186170 cri.go:89] found id: ""
	I1028 12:19:05.477204  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.477214  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:05.477222  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:05.477286  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:05.513314  186170 cri.go:89] found id: ""
	I1028 12:19:05.513350  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.513362  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:05.513374  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:05.513388  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:05.568453  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:05.568490  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:05.583833  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:05.583870  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:05.659413  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:05.659438  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:05.659457  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:05.744673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:05.744714  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.291543  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:08.305747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:08.305829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:07.056718  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:09.056994  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:06.715788  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.716850  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:11.215701  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:07.765389  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:10.268458  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.350508  186170 cri.go:89] found id: ""
	I1028 12:19:08.350536  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.350544  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:08.350550  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:08.350602  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:08.387432  186170 cri.go:89] found id: ""
	I1028 12:19:08.387463  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.387470  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:08.387476  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:08.387527  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:08.426351  186170 cri.go:89] found id: ""
	I1028 12:19:08.426392  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.426404  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:08.426412  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:08.426478  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:08.467546  186170 cri.go:89] found id: ""
	I1028 12:19:08.467577  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.467586  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:08.467592  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:08.467642  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:08.504317  186170 cri.go:89] found id: ""
	I1028 12:19:08.504347  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.504356  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:08.504363  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:08.504418  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:08.539598  186170 cri.go:89] found id: ""
	I1028 12:19:08.539630  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.539642  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:08.539655  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:08.539713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:08.578128  186170 cri.go:89] found id: ""
	I1028 12:19:08.578162  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.578173  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:08.578181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:08.578247  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:08.614276  186170 cri.go:89] found id: ""
	I1028 12:19:08.614309  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.614326  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:08.614338  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:08.614354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:08.691937  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:08.691961  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:08.691977  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:08.773046  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:08.773092  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.816419  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:08.816449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:08.868763  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:08.868811  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.384115  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:11.398325  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:11.398416  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:11.433049  186170 cri.go:89] found id: ""
	I1028 12:19:11.433081  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.433089  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:11.433097  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:11.433151  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:11.469221  186170 cri.go:89] found id: ""
	I1028 12:19:11.469249  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.469259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:11.469267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:11.469332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:11.506673  186170 cri.go:89] found id: ""
	I1028 12:19:11.506703  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.506714  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:11.506722  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:11.506802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:11.542657  186170 cri.go:89] found id: ""
	I1028 12:19:11.542684  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.542694  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:11.542702  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:11.542760  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:11.582873  186170 cri.go:89] found id: ""
	I1028 12:19:11.582903  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.582913  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:11.582921  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:11.582990  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:11.619742  186170 cri.go:89] found id: ""
	I1028 12:19:11.619770  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.619784  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:11.619791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:11.619854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:11.654169  186170 cri.go:89] found id: ""
	I1028 12:19:11.654200  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.654211  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:11.654220  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:11.654280  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:11.690586  186170 cri.go:89] found id: ""
	I1028 12:19:11.690614  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.690624  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:11.690637  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:11.690656  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:11.744337  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:11.744378  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.758405  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:11.758446  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:11.843252  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:11.843278  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:11.843289  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:11.924104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:11.924140  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:11.559182  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.057546  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:13.216963  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:15.715550  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:12.764850  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.766597  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.265687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.464177  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:14.478351  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:14.478423  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:14.518159  186170 cri.go:89] found id: ""
	I1028 12:19:14.518189  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.518200  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:14.518209  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:14.518260  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:14.565688  186170 cri.go:89] found id: ""
	I1028 12:19:14.565722  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.565734  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:14.565742  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:14.565802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:14.601994  186170 cri.go:89] found id: ""
	I1028 12:19:14.602021  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.602029  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:14.602054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:14.602122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:14.640100  186170 cri.go:89] found id: ""
	I1028 12:19:14.640142  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.640156  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:14.640166  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:14.640237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:14.675395  186170 cri.go:89] found id: ""
	I1028 12:19:14.675422  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.675430  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:14.675436  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:14.675494  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:14.715365  186170 cri.go:89] found id: ""
	I1028 12:19:14.715393  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.715404  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:14.715413  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:14.715466  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:14.761335  186170 cri.go:89] found id: ""
	I1028 12:19:14.761363  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.761373  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:14.761381  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:14.761446  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:14.800412  186170 cri.go:89] found id: ""
	I1028 12:19:14.800449  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.800461  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:14.800472  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:14.800486  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:14.882189  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:14.882227  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:14.926725  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:14.926752  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:14.979280  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:14.979329  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:14.993985  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:14.994019  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:15.063407  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.564258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:17.578611  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:17.578679  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:17.615753  186170 cri.go:89] found id: ""
	I1028 12:19:17.615784  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.615797  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:17.615805  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:17.615864  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:17.650812  186170 cri.go:89] found id: ""
	I1028 12:19:17.650851  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.650862  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:17.650870  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:17.651014  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:17.693006  186170 cri.go:89] found id: ""
	I1028 12:19:17.693039  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.693048  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:17.693054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:17.693104  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:17.733120  186170 cri.go:89] found id: ""
	I1028 12:19:17.733146  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.733153  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:17.733160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:17.733212  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:17.773002  186170 cri.go:89] found id: ""
	I1028 12:19:17.773029  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.773036  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:17.773042  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:17.773097  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:17.812560  186170 cri.go:89] found id: ""
	I1028 12:19:17.812590  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.812597  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:17.812603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:17.812653  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:17.848307  186170 cri.go:89] found id: ""
	I1028 12:19:17.848341  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.848349  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:17.848355  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:17.848402  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:17.888184  186170 cri.go:89] found id: ""
	I1028 12:19:17.888210  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.888217  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:17.888226  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:17.888238  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:17.901662  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:17.901692  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:17.975611  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.975634  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:17.975647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:18.054762  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:18.054801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:18.101269  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:18.101302  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:16.057835  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:18.556414  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.716374  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.216629  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:19.266849  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:21.267040  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.655292  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:20.671085  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:20.671161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:20.715368  186170 cri.go:89] found id: ""
	I1028 12:19:20.715397  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.715407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:20.715415  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:20.715476  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:20.762337  186170 cri.go:89] found id: ""
	I1028 12:19:20.762366  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.762374  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:20.762379  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:20.762437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:20.804710  186170 cri.go:89] found id: ""
	I1028 12:19:20.804740  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.804747  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:20.804759  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:20.804813  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:20.841158  186170 cri.go:89] found id: ""
	I1028 12:19:20.841189  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.841199  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:20.841208  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:20.841277  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:20.883976  186170 cri.go:89] found id: ""
	I1028 12:19:20.884016  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.884027  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:20.884035  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:20.884105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:20.930155  186170 cri.go:89] found id: ""
	I1028 12:19:20.930186  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.930194  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:20.930201  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:20.930265  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:20.967805  186170 cri.go:89] found id: ""
	I1028 12:19:20.967832  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.967840  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:20.967847  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:20.967896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:21.020010  186170 cri.go:89] found id: ""
	I1028 12:19:21.020038  186170 logs.go:282] 0 containers: []
	W1028 12:19:21.020046  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:21.020055  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:21.020079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:21.081013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:21.081054  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:21.096709  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:21.096741  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:21.172935  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:21.172957  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:21.172970  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:21.248909  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:21.248949  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:21.056990  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.057233  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:25.555717  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:22.715323  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:24.715818  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.765935  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:26.264839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.793748  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:23.809036  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:23.809107  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:23.848021  186170 cri.go:89] found id: ""
	I1028 12:19:23.848051  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.848064  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:23.848070  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:23.848122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:23.885253  186170 cri.go:89] found id: ""
	I1028 12:19:23.885278  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.885294  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:23.885302  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:23.885360  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:23.923423  186170 cri.go:89] found id: ""
	I1028 12:19:23.923475  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.923484  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:23.923490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:23.923554  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:23.963761  186170 cri.go:89] found id: ""
	I1028 12:19:23.963793  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.963809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:23.963820  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:23.963890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:24.001402  186170 cri.go:89] found id: ""
	I1028 12:19:24.001431  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.001440  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:24.001447  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:24.001512  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:24.042367  186170 cri.go:89] found id: ""
	I1028 12:19:24.042400  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.042410  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:24.042419  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:24.042480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:24.081838  186170 cri.go:89] found id: ""
	I1028 12:19:24.081865  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.081873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:24.081879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:24.081932  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:24.117066  186170 cri.go:89] found id: ""
	I1028 12:19:24.117096  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.117104  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:24.117113  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:24.117125  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:24.156892  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:24.156928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:24.210595  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:24.210631  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:24.226214  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:24.226248  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:24.304750  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:24.304775  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:24.304792  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:26.887059  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:26.901656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:26.901735  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:26.944377  186170 cri.go:89] found id: ""
	I1028 12:19:26.944407  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.944416  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:26.944425  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:26.944487  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:26.980794  186170 cri.go:89] found id: ""
	I1028 12:19:26.980827  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.980835  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:26.980841  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:26.980907  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:27.023661  186170 cri.go:89] found id: ""
	I1028 12:19:27.023686  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.023694  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:27.023701  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:27.023753  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:27.062325  186170 cri.go:89] found id: ""
	I1028 12:19:27.062353  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.062361  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:27.062369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:27.062417  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:27.101200  186170 cri.go:89] found id: ""
	I1028 12:19:27.101230  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.101237  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:27.101243  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:27.101300  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:27.139566  186170 cri.go:89] found id: ""
	I1028 12:19:27.139591  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.139598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:27.139605  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:27.139664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:27.183931  186170 cri.go:89] found id: ""
	I1028 12:19:27.183959  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.183968  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:27.183996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:27.184065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:27.226978  186170 cri.go:89] found id: ""
	I1028 12:19:27.227012  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.227027  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:27.227038  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:27.227067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:27.279752  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:27.279790  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:27.293477  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:27.293504  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:27.365813  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:27.365836  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:27.365850  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:27.458409  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:27.458466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:27.556370  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.057786  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:27.216093  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:29.715861  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:28.265912  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.266993  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:32.267566  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.023363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:30.036965  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:30.037032  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:30.077599  186170 cri.go:89] found id: ""
	I1028 12:19:30.077627  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.077635  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:30.077642  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:30.077691  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:30.115071  186170 cri.go:89] found id: ""
	I1028 12:19:30.115103  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.115113  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:30.115121  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:30.115189  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:30.150636  186170 cri.go:89] found id: ""
	I1028 12:19:30.150665  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.150678  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:30.150684  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:30.150747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:30.188339  186170 cri.go:89] found id: ""
	I1028 12:19:30.188380  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.188390  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:30.188397  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:30.188452  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:30.224072  186170 cri.go:89] found id: ""
	I1028 12:19:30.224102  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.224113  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:30.224121  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:30.224185  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:30.258784  186170 cri.go:89] found id: ""
	I1028 12:19:30.258822  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.258834  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:30.258842  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:30.258903  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:30.302495  186170 cri.go:89] found id: ""
	I1028 12:19:30.302527  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.302535  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:30.302541  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:30.302590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:30.339170  186170 cri.go:89] found id: ""
	I1028 12:19:30.339201  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.339213  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:30.339223  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:30.339236  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:30.396664  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:30.396700  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:30.411609  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:30.411638  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:30.484168  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:30.484196  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:30.484212  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:30.567664  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:30.567704  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:33.111268  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:33.125143  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:33.125229  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:33.168662  186170 cri.go:89] found id: ""
	I1028 12:19:33.168701  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.168712  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:33.168722  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:33.168792  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:33.222421  186170 cri.go:89] found id: ""
	I1028 12:19:33.222451  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.222463  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:33.222471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:33.222536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:33.275637  186170 cri.go:89] found id: ""
	I1028 12:19:33.275669  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.275680  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:33.275689  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:33.275751  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:32.555888  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.556782  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:31.716178  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.213813  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.213999  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.764307  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.766217  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:33.325787  186170 cri.go:89] found id: ""
	I1028 12:19:33.325818  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.325830  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:33.325840  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:33.325900  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:33.361597  186170 cri.go:89] found id: ""
	I1028 12:19:33.361634  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.361644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:33.361652  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:33.361744  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:33.401838  186170 cri.go:89] found id: ""
	I1028 12:19:33.401866  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.401874  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:33.401880  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:33.401941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:33.439315  186170 cri.go:89] found id: ""
	I1028 12:19:33.439342  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.439351  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:33.439359  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:33.439422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:33.479140  186170 cri.go:89] found id: ""
	I1028 12:19:33.479177  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.479188  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:33.479206  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:33.479222  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:33.534059  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:33.534102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:33.549379  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:33.549416  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:33.626567  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:33.626603  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:33.626619  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:33.702398  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:33.702441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.250145  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:36.265123  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:36.265193  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:36.304048  186170 cri.go:89] found id: ""
	I1028 12:19:36.304078  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.304087  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:36.304093  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:36.304141  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:36.348611  186170 cri.go:89] found id: ""
	I1028 12:19:36.348649  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.348660  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:36.348672  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:36.348739  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:36.390510  186170 cri.go:89] found id: ""
	I1028 12:19:36.390543  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.390555  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:36.390563  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:36.390627  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:36.430465  186170 cri.go:89] found id: ""
	I1028 12:19:36.430489  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.430496  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:36.430503  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:36.430556  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:36.472189  186170 cri.go:89] found id: ""
	I1028 12:19:36.472216  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.472226  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:36.472234  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:36.472332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:36.510029  186170 cri.go:89] found id: ""
	I1028 12:19:36.510057  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.510065  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:36.510073  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:36.510133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:36.548556  186170 cri.go:89] found id: ""
	I1028 12:19:36.548581  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.548589  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:36.548595  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:36.548641  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:36.592965  186170 cri.go:89] found id: ""
	I1028 12:19:36.592993  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.593002  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:36.593013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:36.593032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:36.608843  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:36.608878  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:36.680629  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:36.680655  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:36.680672  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:36.768605  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:36.768636  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.815293  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:36.815334  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:37.056333  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.559461  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:38.214406  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:40.214795  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.264988  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:41.267329  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.369371  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:39.382819  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:39.382905  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:39.421953  186170 cri.go:89] found id: ""
	I1028 12:19:39.421990  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.422018  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:39.422028  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:39.422088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:39.457426  186170 cri.go:89] found id: ""
	I1028 12:19:39.457461  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.457478  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:39.457484  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:39.457558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:39.494983  186170 cri.go:89] found id: ""
	I1028 12:19:39.495008  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.495018  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:39.495026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:39.495105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:39.530187  186170 cri.go:89] found id: ""
	I1028 12:19:39.530221  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.530233  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:39.530242  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:39.530308  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:39.571088  186170 cri.go:89] found id: ""
	I1028 12:19:39.571123  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.571133  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:39.571142  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:39.571204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:39.605684  186170 cri.go:89] found id: ""
	I1028 12:19:39.605719  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.605731  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:39.605739  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:39.605804  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:39.639083  186170 cri.go:89] found id: ""
	I1028 12:19:39.639115  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.639125  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:39.639133  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:39.639195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:39.676273  186170 cri.go:89] found id: ""
	I1028 12:19:39.676310  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.676321  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:39.676332  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:39.676349  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:39.733153  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:39.733190  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:39.748475  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:39.748513  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:39.823884  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:39.823906  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:39.823920  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:39.903711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:39.903763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.447237  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:42.460741  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:42.460822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:42.500518  186170 cri.go:89] found id: ""
	I1028 12:19:42.500553  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.500565  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:42.500574  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:42.500636  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:42.542836  186170 cri.go:89] found id: ""
	I1028 12:19:42.542867  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.542875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:42.542882  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:42.542943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:42.581271  186170 cri.go:89] found id: ""
	I1028 12:19:42.581303  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.581322  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:42.581331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:42.581382  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:42.616772  186170 cri.go:89] found id: ""
	I1028 12:19:42.616796  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.616803  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:42.616809  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:42.616858  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:42.650467  186170 cri.go:89] found id: ""
	I1028 12:19:42.650504  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.650515  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:42.650524  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:42.650590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:42.688677  186170 cri.go:89] found id: ""
	I1028 12:19:42.688713  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.688726  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:42.688734  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:42.688796  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:42.727141  186170 cri.go:89] found id: ""
	I1028 12:19:42.727167  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.727174  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:42.727181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:42.727231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:42.767373  186170 cri.go:89] found id: ""
	I1028 12:19:42.767404  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.767415  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:42.767425  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:42.767438  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:42.818474  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:42.818511  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:42.832181  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:42.832210  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:42.905428  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:42.905450  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:42.905465  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:42.985614  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:42.985653  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.056568  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:44.057256  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:42.715261  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.215472  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:43.765595  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.766087  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.527361  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:45.541487  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:45.541574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:45.579562  186170 cri.go:89] found id: ""
	I1028 12:19:45.579591  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.579600  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:45.579606  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:45.579666  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:45.614461  186170 cri.go:89] found id: ""
	I1028 12:19:45.614494  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.614504  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:45.614512  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:45.614575  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:45.651495  186170 cri.go:89] found id: ""
	I1028 12:19:45.651538  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.651550  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:45.651558  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:45.651619  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:45.691664  186170 cri.go:89] found id: ""
	I1028 12:19:45.691699  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.691710  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:45.691718  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:45.691785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:45.730284  186170 cri.go:89] found id: ""
	I1028 12:19:45.730325  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.730341  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:45.730348  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:45.730410  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:45.766524  186170 cri.go:89] found id: ""
	I1028 12:19:45.766554  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.766565  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:45.766573  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:45.766630  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:45.803353  186170 cri.go:89] found id: ""
	I1028 12:19:45.803381  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.803393  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:45.803400  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:45.803468  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:45.842928  186170 cri.go:89] found id: ""
	I1028 12:19:45.842953  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.842960  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:45.842968  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:45.842979  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:45.921782  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:45.921809  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:45.921826  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:45.997269  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:45.997321  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:46.036008  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:46.036042  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:46.090242  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:46.090282  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:46.058519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.556533  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:47.215644  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:49.715563  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.266115  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:50.268535  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:52.271227  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.607052  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:48.620745  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:48.620816  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:48.657550  186170 cri.go:89] found id: ""
	I1028 12:19:48.657582  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.657592  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:48.657601  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:48.657676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:48.695514  186170 cri.go:89] found id: ""
	I1028 12:19:48.695542  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.695549  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:48.695555  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:48.695603  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:48.733589  186170 cri.go:89] found id: ""
	I1028 12:19:48.733616  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.733624  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:48.733631  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:48.733680  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:48.768340  186170 cri.go:89] found id: ""
	I1028 12:19:48.768370  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.768378  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:48.768384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:48.768435  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:48.818057  186170 cri.go:89] found id: ""
	I1028 12:19:48.818086  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.818096  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:48.818105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:48.818169  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:48.854663  186170 cri.go:89] found id: ""
	I1028 12:19:48.854695  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.854705  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:48.854715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:48.854785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:48.888919  186170 cri.go:89] found id: ""
	I1028 12:19:48.888949  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.888960  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:48.888969  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:48.889030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:48.923871  186170 cri.go:89] found id: ""
	I1028 12:19:48.923900  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.923908  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:48.923917  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:48.923928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:48.977985  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:48.978025  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:48.992861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:48.992893  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:49.071925  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:49.071952  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:49.071969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:49.149743  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:49.149784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.693881  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:51.708017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:51.708079  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:51.748837  186170 cri.go:89] found id: ""
	I1028 12:19:51.748872  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.748883  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:51.748892  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:51.748957  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:51.793684  186170 cri.go:89] found id: ""
	I1028 12:19:51.793716  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.793733  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:51.793741  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:51.793803  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:51.832104  186170 cri.go:89] found id: ""
	I1028 12:19:51.832140  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.832151  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:51.832159  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:51.832225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:51.866214  186170 cri.go:89] found id: ""
	I1028 12:19:51.866250  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.866264  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:51.866270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:51.866345  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:51.909073  186170 cri.go:89] found id: ""
	I1028 12:19:51.909100  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.909107  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:51.909113  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:51.909160  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:51.949202  186170 cri.go:89] found id: ""
	I1028 12:19:51.949231  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.949239  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:51.949245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:51.949306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:51.990977  186170 cri.go:89] found id: ""
	I1028 12:19:51.991004  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.991011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:51.991018  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:51.991069  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:52.027180  186170 cri.go:89] found id: ""
	I1028 12:19:52.027215  186170 logs.go:282] 0 containers: []
	W1028 12:19:52.027226  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:52.027237  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:52.027259  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:52.080482  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:52.080536  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:52.097572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:52.097612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:52.173055  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:52.173095  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:52.173113  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:52.249950  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:52.249995  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.056089  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:53.056973  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:55.057853  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:51.716787  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.214943  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.765208  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:57.267687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.794765  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:54.809435  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:54.809548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:54.846763  186170 cri.go:89] found id: ""
	I1028 12:19:54.846793  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.846805  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:54.846815  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:54.846876  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:54.885359  186170 cri.go:89] found id: ""
	I1028 12:19:54.885396  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.885409  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:54.885417  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:54.885481  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:54.922612  186170 cri.go:89] found id: ""
	I1028 12:19:54.922639  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.922650  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:54.922659  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:54.922722  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:54.958406  186170 cri.go:89] found id: ""
	I1028 12:19:54.958439  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.958450  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:54.958459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:54.958525  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:54.995319  186170 cri.go:89] found id: ""
	I1028 12:19:54.995350  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.995361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:54.995370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:54.995440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:55.032511  186170 cri.go:89] found id: ""
	I1028 12:19:55.032543  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.032551  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:55.032559  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:55.032624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:55.073196  186170 cri.go:89] found id: ""
	I1028 12:19:55.073226  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.073238  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:55.073245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:55.073310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:55.113726  186170 cri.go:89] found id: ""
	I1028 12:19:55.113754  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.113762  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:55.113771  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:55.113787  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:55.164402  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:55.164442  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:55.180729  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:55.180763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:55.254437  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:55.254466  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:55.254483  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:55.341392  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:55.341441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:57.883896  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:57.897429  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:57.897539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:57.933084  186170 cri.go:89] found id: ""
	I1028 12:19:57.933109  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.933118  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:57.933127  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:57.933198  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:57.971244  186170 cri.go:89] found id: ""
	I1028 12:19:57.971276  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.971289  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:57.971298  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:57.971361  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:58.007916  186170 cri.go:89] found id: ""
	I1028 12:19:58.007952  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.007963  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:58.007972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:58.008050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:58.043042  186170 cri.go:89] found id: ""
	I1028 12:19:58.043084  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.043094  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:58.043103  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:58.043172  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:58.080277  186170 cri.go:89] found id: ""
	I1028 12:19:58.080314  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.080324  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:58.080332  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:58.080395  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:58.117254  186170 cri.go:89] found id: ""
	I1028 12:19:58.117292  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.117301  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:58.117308  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:58.117356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:58.152830  186170 cri.go:89] found id: ""
	I1028 12:19:58.152862  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.152873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:58.152881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:58.152946  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:58.190229  186170 cri.go:89] found id: ""
	I1028 12:19:58.190259  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.190270  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:58.190281  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:58.190296  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:58.231792  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:58.231823  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:58.291189  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:58.291233  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:58.307804  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:58.307837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:19:57.556056  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.557091  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:00.050404  185942 pod_ready.go:82] duration metric: took 4m0.000726571s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:00.050457  185942 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:00.050479  185942 pod_ready.go:39] duration metric: took 4m12.759391454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:00.050506  185942 kubeadm.go:597] duration metric: took 4m20.427916933s to restartPrimaryControlPlane
	W1028 12:20:00.050569  185942 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:00.050616  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:19:56.715048  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.215821  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.769397  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:02.265702  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:19:58.384490  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:58.384515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:58.384530  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:00.963569  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:00.977292  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:20:00.977363  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:20:01.017161  186170 cri.go:89] found id: ""
	I1028 12:20:01.017190  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.017198  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:20:01.017204  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:20:01.017254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:20:01.054651  186170 cri.go:89] found id: ""
	I1028 12:20:01.054687  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.054698  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:20:01.054705  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:20:01.054768  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:20:01.092934  186170 cri.go:89] found id: ""
	I1028 12:20:01.092968  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.092979  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:20:01.092988  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:20:01.093048  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:20:01.134463  186170 cri.go:89] found id: ""
	I1028 12:20:01.134499  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.134510  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:20:01.134519  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:20:01.134580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:20:01.171922  186170 cri.go:89] found id: ""
	I1028 12:20:01.171960  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.171970  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:20:01.171978  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:20:01.172050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:20:01.208664  186170 cri.go:89] found id: ""
	I1028 12:20:01.208694  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.208703  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:20:01.208715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:20:01.208781  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:20:01.248207  186170 cri.go:89] found id: ""
	I1028 12:20:01.248242  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.248251  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:20:01.248258  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:20:01.248318  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:20:01.289182  186170 cri.go:89] found id: ""
	I1028 12:20:01.289212  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.289222  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:20:01.289233  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:20:01.289277  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:20:01.334646  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:20:01.334679  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:20:01.396212  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:20:01.396255  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:20:01.411774  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:20:01.411801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:20:01.497745  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:20:01.497772  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:20:01.497784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
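The gathering sequence above (crictl listing, kubelet journal, dmesg, describe nodes, CRI-O journal) can be reproduced by hand when triaging a failed control-plane restart. A minimal sketch, assuming shell access to the node (for example via `minikube ssh`) and the binary paths shown in the log; the commands mirror the log lines above and are not minikube's own implementation:

# Hedged sketch: collect the same diagnostics minikube gathers above.
sudo crictl ps -a                                    # all CRI containers, any state
sudo journalctl -u kubelet -n 400                    # recent kubelet log
sudo journalctl -u crio -n 400                       # recent CRI-O log
sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
# Fails with "connection refused" while the apiserver is down, as logged above.
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig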
	I1028 12:20:01.715264  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.216628  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.765386  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:06.765802  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.092363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:04.106585  186170 kubeadm.go:597] duration metric: took 4m1.83229859s to restartPrimaryControlPlane
	W1028 12:20:04.106657  186170 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:04.106678  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:07.549703  186170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.442997936s)
	I1028 12:20:07.549781  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:07.565304  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:07.577919  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:07.590433  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:07.590461  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:07.590514  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:07.600793  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:07.600858  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:07.611331  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:07.621191  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:07.621256  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:07.631722  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.642180  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:07.642255  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.654425  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:07.664696  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:07.664755  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
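The block above applies the same per-file pattern four times: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and remove the file when the endpoint is absent (or, as here, when the file does not exist), so that the subsequent `kubeadm init` regenerates it. A hedged shell sketch of that pattern, using the endpoint and file list from the log; the loop form is illustrative, not minikube's code:

# Illustrative stale-kubeconfig cleanup, mirroring the grep-then-rm steps above.
endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # Remove the file if it is missing or does not reference the expected endpoint.
  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done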
	I1028 12:20:07.675272  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:07.902931  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:06.715439  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.214561  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.216343  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.265899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.764867  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:13.716362  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.214893  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:14.264333  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.765340  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:18.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:20.715790  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:19.270934  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:21.764931  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:22.715880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:25.216499  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:23.766240  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.271567  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.353961  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.303321788s)
	I1028 12:20:26.354038  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:26.373066  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:26.386209  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:26.398568  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:26.398591  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:26.398634  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:26.410916  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:26.410976  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:26.423771  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:26.435883  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:26.435961  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:26.448506  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.460449  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:26.460506  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.472817  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:26.483653  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:26.483743  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:26.494435  185942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:26.682378  185942 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
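Both kubeadm runs print the same Service-Kubelet preflight warning. minikube manages the kubelet itself during the test, so the warning is benign here; if you did want the unit enabled at boot, the fix is the one the warning names (a hedged aside, not something the test performs):

# Enable (and start) the kubelet unit, as the kubeadm preflight warning suggests.
sudo systemctl enable --now kubelet.service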
	I1028 12:20:27.715587  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:29.717407  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:28.766206  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:30.766289  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.820344  185942 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:20:35.820446  185942 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:20:35.820555  185942 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:20:35.820688  185942 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:20:35.820812  185942 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:20:35.820902  185942 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:20:35.823423  185942 out.go:235]   - Generating certificates and keys ...
	I1028 12:20:35.823594  185942 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:20:35.823700  185942 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:20:35.823804  185942 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:20:35.823893  185942 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:20:35.824001  185942 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:20:35.824082  185942 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:20:35.824167  185942 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:20:35.824255  185942 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:20:35.824360  185942 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:20:35.824445  185942 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:20:35.824504  185942 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:20:35.824566  185942 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:20:35.824622  185942 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:20:35.824725  185942 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:20:35.824805  185942 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:20:35.824944  185942 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:20:35.825058  185942 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:20:35.825209  185942 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:20:35.825300  185942 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:20:35.826890  185942 out.go:235]   - Booting up control plane ...
	I1028 12:20:35.827007  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:20:35.827077  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:20:35.827142  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:20:35.827285  185942 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:20:35.827420  185942 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:20:35.827487  185942 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:20:35.827705  185942 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:20:35.827848  185942 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:20:35.827943  185942 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.264999ms
	I1028 12:20:35.828059  185942 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:20:35.828130  185942 kubeadm.go:310] [api-check] The API server is healthy after 5.502732581s
	I1028 12:20:35.828299  185942 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:20:35.828472  185942 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:20:35.828523  185942 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:20:35.828712  185942 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-709250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:20:35.828764  185942 kubeadm.go:310] [bootstrap-token] Using token: srdxzz.lxk56bs7sgkeocij
	I1028 12:20:35.830228  185942 out.go:235]   - Configuring RBAC rules ...
	I1028 12:20:35.830335  185942 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:20:35.830422  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:20:35.830563  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:20:35.830729  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:20:35.830842  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:20:35.830928  185942 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:20:35.831065  185942 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:20:35.831122  185942 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:20:35.831174  185942 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:20:35.831181  185942 kubeadm.go:310] 
	I1028 12:20:35.831229  185942 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:20:35.831237  185942 kubeadm.go:310] 
	I1028 12:20:35.831302  185942 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:20:35.831313  185942 kubeadm.go:310] 
	I1028 12:20:35.831356  185942 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:20:35.831439  185942 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:20:35.831517  185942 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:20:35.831531  185942 kubeadm.go:310] 
	I1028 12:20:35.831616  185942 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:20:35.831628  185942 kubeadm.go:310] 
	I1028 12:20:35.831678  185942 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:20:35.831682  185942 kubeadm.go:310] 
	I1028 12:20:35.831730  185942 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:20:35.831809  185942 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:20:35.831921  185942 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:20:35.831933  185942 kubeadm.go:310] 
	I1028 12:20:35.832041  185942 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:20:35.832141  185942 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:20:35.832150  185942 kubeadm.go:310] 
	I1028 12:20:35.832249  185942 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832373  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:20:35.832404  185942 kubeadm.go:310] 	--control-plane 
	I1028 12:20:35.832414  185942 kubeadm.go:310] 
	I1028 12:20:35.832516  185942 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:20:35.832524  185942 kubeadm.go:310] 
	I1028 12:20:35.832642  185942 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832812  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
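The join commands printed above embed a bootstrap token and a CA certificate hash. If the hash needs to be recomputed later (for instance to join another node after this output has scrolled away), the usual kubeadm recipe hashes the cluster CA public key. A sketch, assuming the default RSA CA at /etc/kubernetes/pki/ca.crt on the control-plane node; kubeadm expects the result prefixed with "sha256:":

# Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'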
	I1028 12:20:35.832833  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:20:35.832843  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:20:35.834428  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:20:35.835603  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:20:35.847857  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
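Here minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The actual file contents are not shown in the log; the sketch below writes a generic bridge + host-local conflist of the shape the bridge plugin expects, purely as an illustration (the name, subnet, and plugin options are assumptions, not minikube's real file):

# Illustrative bridge CNI config; the real 1-k8s.conflist contents are not in the log.
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF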
	I1028 12:20:35.867921  185942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:20:35.868088  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:35.868107  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709250 minikube.k8s.io/updated_at=2024_10_28T12_20_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=embed-certs-709250 minikube.k8s.io/primary=true
	I1028 12:20:35.908233  185942 ops.go:34] apiserver oom_adj: -16
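The two kubectl invocations above bind kube-system:default to cluster-admin and label the node as the primary. A quick, hedged way to confirm both took effect from a machine whose kubeconfig already points at the embed-certs-709250 cluster (verification only, not part of the test flow):

# Confirm the RBAC binding and node labels created above.
kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default
kubectl get node embed-certs-709250 --show-labels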
	I1028 12:20:32.215299  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:34.716880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:32.766922  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.267132  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:36.121114  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:36.621188  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.122032  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.621405  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.122105  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.621960  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.122142  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.622093  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.121643  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.287609  185942 kubeadm.go:1113] duration metric: took 4.419612649s to wait for elevateKubeSystemPrivileges
	I1028 12:20:40.287656  185942 kubeadm.go:394] duration metric: took 5m0.720591132s to StartCluster
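The repeated `kubectl get sa default` calls above are a simple poll for the default service account to exist before privileges are elevated. A hedged shell equivalent of that retry loop, using the same bundled kubectl and kubeconfig paths as the log:

# Poll until the "default" service account exists, mirroring the retries above.
until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done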
	I1028 12:20:40.287703  185942 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.287814  185942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:20:40.290472  185942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.290787  185942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:20:40.291051  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:20:40.290926  185942 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:20:40.291125  185942 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709250"
	I1028 12:20:40.291126  185942 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709250"
	I1028 12:20:40.291142  185942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709250"
	I1028 12:20:40.291148  185942 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709250"
	W1028 12:20:40.291158  185942 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:20:40.291182  185942 addons.go:69] Setting metrics-server=true in profile "embed-certs-709250"
	I1028 12:20:40.291220  185942 addons.go:234] Setting addon metrics-server=true in "embed-certs-709250"
	W1028 12:20:40.291233  185942 addons.go:243] addon metrics-server should already be in state true
	I1028 12:20:40.291282  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291195  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291593  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291631  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291727  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291771  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291786  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291813  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.292877  185942 out.go:177] * Verifying Kubernetes components...
	I1028 12:20:40.294858  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:20:40.310225  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I1028 12:20:40.310814  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.311524  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.311552  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.311961  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.312174  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.312867  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1028 12:20:40.312901  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I1028 12:20:40.313354  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313389  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313964  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.313987  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.313967  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.314040  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.314365  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314428  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314883  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.314907  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.315710  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.315744  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.316210  185942 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709250"
	W1028 12:20:40.316229  185942 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:20:40.316261  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.316619  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.316648  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.331940  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1028 12:20:40.332732  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.333487  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.333537  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.333932  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.334145  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.336054  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I1028 12:20:40.336291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.336441  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337079  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.337117  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.337211  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I1028 12:20:40.337597  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337998  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338171  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.338189  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.338291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.338925  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338972  185942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:20:40.339570  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.339621  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.340197  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.341080  185942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.341099  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:20:40.341115  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.341872  185942 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:20:40.343244  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:20:40.343278  185942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:20:40.343308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.344718  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345186  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.345216  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345457  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.345666  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.345842  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.346053  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.346977  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347514  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.347546  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347739  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.347936  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.348069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.348236  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.357912  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I1028 12:20:40.358358  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.358838  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.358858  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.359224  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.359441  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.361308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.361630  185942 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.361654  185942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:20:40.361675  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.365789  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366319  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.366347  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366659  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.366879  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.367069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.367245  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.526205  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:20:40.545404  185942 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555003  185942 node_ready.go:49] node "embed-certs-709250" has status "Ready":"True"
	I1028 12:20:40.555028  185942 node_ready.go:38] duration metric: took 9.592797ms for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555047  185942 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:40.564021  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
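The node_ready/pod_ready helpers above poll the API for Ready conditions from Go. The same checks can be expressed with `kubectl wait` (an illustrative equivalent, not what the test actually runs):

# Equivalent readiness checks done by hand with kubectl wait.
kubectl wait --for=condition=Ready node/embed-certs-709250 --timeout=6m
kubectl wait --for=condition=Ready pod -n kube-system -l k8s-app=kube-dns --timeout=6m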
	I1028 12:20:40.660020  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:20:40.660061  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:20:40.666435  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.691423  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.692384  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:20:40.692411  185942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:20:40.739518  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:20:40.739549  185942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:20:40.765228  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
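The addon installation above amounts to copying manifests into /etc/kubernetes/addons on the node and applying them with the bundled kubectl. The same apply commands, runnable by hand on the node with the paths taken from the log:

# Apply the addon manifests exactly as the log shows (run on the node).
sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
  /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
  /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
  /var/lib/minikube/binaries/v1.31.2/kubectl apply \
  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
  -f /etc/kubernetes/addons/metrics-server-service.yaml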
	I1028 12:20:37.216347  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:39.716471  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.192384  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192422  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192491  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192514  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192740  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192759  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192783  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192791  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192915  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192942  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192951  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192962  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.193093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193125  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193131  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.193373  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193403  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193409  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.229776  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.229808  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.230111  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.230127  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.624688  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.624714  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625048  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.625055  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625066  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625074  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.625081  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625283  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625312  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625325  185942 addons.go:475] Verifying addon metrics-server=true in "embed-certs-709250"
	I1028 12:20:41.625329  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.627194  185942 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:20:37.771166  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:40.265616  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.265990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.628572  185942 addons.go:510] duration metric: took 1.337655555s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:20:42.572801  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.571062  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.571095  185942 pod_ready.go:82] duration metric: took 3.007040788s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.571110  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576592  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.576620  185942 pod_ready.go:82] duration metric: took 5.500425ms for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576633  185942 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:45.583586  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.216524  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:44.715547  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.758721  186547 pod_ready.go:82] duration metric: took 4m0.000295852s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:43.758758  186547 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:43.758783  186547 pod_ready.go:39] duration metric: took 4m13.710127509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:43.758811  186547 kubeadm.go:597] duration metric: took 4m21.647032906s to restartPrimaryControlPlane
	W1028 12:20:43.758873  186547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:43.758910  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:47.089478  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.089502  185942 pod_ready.go:82] duration metric: took 3.512861746s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.089512  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094229  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.094255  185942 pod_ready.go:82] duration metric: took 4.736326ms for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094267  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098823  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.098859  185942 pod_ready.go:82] duration metric: took 4.584003ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098872  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104063  185942 pod_ready.go:93] pod "kube-proxy-gck6r" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.104083  185942 pod_ready.go:82] duration metric: took 5.204526ms for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104091  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168177  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.168210  185942 pod_ready.go:82] duration metric: took 64.110225ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168221  185942 pod_ready.go:39] duration metric: took 6.613160968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:47.168243  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:20:47.168309  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:47.186907  185942 api_server.go:72] duration metric: took 6.896070864s to wait for apiserver process to appear ...
	I1028 12:20:47.186944  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:20:47.186998  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:20:47.191428  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:20:47.192677  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:20:47.192708  185942 api_server.go:131] duration metric: took 5.753471ms to wait for apiserver health ...
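The healthz probe above is a plain HTTPS GET against the apiserver endpoint from the log. The same check can be made by hand; `-k` skips certificate verification and is only appropriate for a quick probe:

# Same apiserver health probe, done manually.
kubectl get --raw /healthz
curl -k https://192.168.39.211:8443/healthz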
	I1028 12:20:47.192719  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:20:47.372534  185942 system_pods.go:59] 9 kube-system pods found
	I1028 12:20:47.372571  185942 system_pods.go:61] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.372580  185942 system_pods.go:61] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.372585  185942 system_pods.go:61] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.372590  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.372595  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.372599  185942 system_pods.go:61] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.372605  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.372614  185942 system_pods.go:61] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.372620  185942 system_pods.go:61] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.372633  185942 system_pods.go:74] duration metric: took 179.905205ms to wait for pod list to return data ...
	I1028 12:20:47.372647  185942 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:20:47.569853  185942 default_sa.go:45] found service account: "default"
	I1028 12:20:47.569886  185942 default_sa.go:55] duration metric: took 197.228265ms for default service account to be created ...
	I1028 12:20:47.569900  185942 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:20:47.770906  185942 system_pods.go:86] 9 kube-system pods found
	I1028 12:20:47.770941  185942 system_pods.go:89] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.770948  185942 system_pods.go:89] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.770953  185942 system_pods.go:89] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.770956  185942 system_pods.go:89] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.770960  185942 system_pods.go:89] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.770964  185942 system_pods.go:89] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.770967  185942 system_pods.go:89] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.770973  185942 system_pods.go:89] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.770977  185942 system_pods.go:89] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.770984  185942 system_pods.go:126] duration metric: took 201.078078ms to wait for k8s-apps to be running ...
	I1028 12:20:47.770990  185942 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:20:47.771033  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:47.787139  185942 system_svc.go:56] duration metric: took 16.13776ms WaitForService to wait for kubelet
	I1028 12:20:47.787171  185942 kubeadm.go:582] duration metric: took 7.496343244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:20:47.787191  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:20:47.969485  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:20:47.969516  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:20:47.969547  185942 node_conditions.go:105] duration metric: took 182.350787ms to run NodePressure ...
	I1028 12:20:47.969562  185942 start.go:241] waiting for startup goroutines ...
	I1028 12:20:47.969572  185942 start.go:246] waiting for cluster config update ...
	I1028 12:20:47.969586  185942 start.go:255] writing updated cluster config ...
	I1028 12:20:47.969916  185942 ssh_runner.go:195] Run: rm -f paused
	I1028 12:20:48.021806  185942 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:20:48.023816  185942 out.go:177] * Done! kubectl is now configured to use "embed-certs-709250" cluster and "default" namespace by default
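After "Done!", the kubeconfig has been switched to the new cluster. A quick sanity check from the host (hedged aside, not part of the test run):

# Confirm kubectl now targets the new cluster and the node is Ready.
kubectl config current-context
kubectl get nodes -o wide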
	I1028 12:20:46.716844  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:49.216673  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:51.715101  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:53.715509  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:56.217201  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:58.715405  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:00.715890  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:03.214669  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:05.215054  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.108895  186547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.349960271s)
	I1028 12:21:10.108979  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:10.126064  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:21:10.139862  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:21:10.150752  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:21:10.150780  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:21:10.150837  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:21:10.161522  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:21:10.161604  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:21:10.172230  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:21:10.183231  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:21:10.183299  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:21:10.194261  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.204462  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:21:10.204524  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.214991  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:21:10.225246  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:21:10.225315  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:21:10.235439  186547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:21:10.280951  186547 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:21:10.281020  186547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:21:10.391997  186547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:21:10.392163  186547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:21:10.392297  186547 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:21:10.402113  186547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:21:07.217549  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:09.716985  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.404087  186547 out.go:235]   - Generating certificates and keys ...
	I1028 12:21:10.404194  186547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:21:10.404312  186547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:21:10.404441  186547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:21:10.404537  186547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:21:10.404642  186547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:21:10.404719  186547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:21:10.404824  186547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:21:10.404914  186547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:21:10.405021  186547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:21:10.405124  186547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:21:10.405185  186547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:21:10.405269  186547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:21:10.608657  186547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:21:10.910608  186547 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:21:11.076768  186547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:21:11.244109  186547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:21:11.685910  186547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:21:11.686470  186547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:21:11.692266  186547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:21:11.694100  186547 out.go:235]   - Booting up control plane ...
	I1028 12:21:11.694231  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:21:11.694377  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:21:11.694607  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:21:11.713908  186547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:21:11.720788  186547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:21:11.720874  186547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:21:11.856867  186547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:21:11.856998  186547 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:21:12.358968  186547 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.942759ms
	I1028 12:21:12.359067  186547 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:21:12.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:14.208408  185546 pod_ready.go:82] duration metric: took 4m0.000135609s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:21:14.208447  185546 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:21:14.208457  185546 pod_ready.go:39] duration metric: took 4m3.200735753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:14.208485  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:14.208519  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:14.208571  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:14.266154  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.266184  185546 cri.go:89] found id: ""
	I1028 12:21:14.266196  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:14.266255  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.271416  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:14.271497  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:14.310426  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.310457  185546 cri.go:89] found id: ""
	I1028 12:21:14.310467  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:14.310529  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.314961  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:14.315037  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:14.362502  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.362530  185546 cri.go:89] found id: ""
	I1028 12:21:14.362540  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:14.362602  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.368118  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:14.368198  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:14.416827  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.416867  185546 cri.go:89] found id: ""
	I1028 12:21:14.416877  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:14.416943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.421640  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:14.421716  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:14.473506  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:14.473552  185546 cri.go:89] found id: ""
	I1028 12:21:14.473563  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:14.473627  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.480106  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:14.480183  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:14.529939  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:14.529964  185546 cri.go:89] found id: ""
	I1028 12:21:14.529971  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:14.530120  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.536199  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:14.536264  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:14.578374  185546 cri.go:89] found id: ""
	I1028 12:21:14.578407  185546 logs.go:282] 0 containers: []
	W1028 12:21:14.578419  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:14.578428  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:14.578490  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:14.620216  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:14.620243  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:14.620249  185546 cri.go:89] found id: ""
	I1028 12:21:14.620258  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:14.620323  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.625798  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.630653  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:14.630683  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:14.645364  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:14.645404  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.686202  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:14.686234  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.730094  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:14.730125  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:14.786272  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:14.786322  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:14.875705  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:14.875746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.931913  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:14.931960  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.991914  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:14.991953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:15.037022  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:15.037056  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:15.107597  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:15.107649  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:15.161401  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:15.161442  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:15.201916  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:15.201953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:15.682647  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:15.682694  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:17.861193  186547 kubeadm.go:310] [api-check] The API server is healthy after 5.502448006s
	I1028 12:21:17.874856  186547 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:21:17.889216  186547 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:21:17.933411  186547 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:21:17.933726  186547 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-349222 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:21:17.964667  186547 kubeadm.go:310] [bootstrap-token] Using token: o3vo7c.1x7759cggrb8kl7r
	I1028 12:21:17.966405  186547 out.go:235]   - Configuring RBAC rules ...
	I1028 12:21:17.966590  186547 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:21:17.982231  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:21:17.991850  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:21:17.996073  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:21:18.003531  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:21:18.008369  186547 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:21:18.272751  186547 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:21:18.724493  186547 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:21:19.269583  186547 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:21:19.270654  186547 kubeadm.go:310] 
	I1028 12:21:19.270715  186547 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:21:19.270722  186547 kubeadm.go:310] 
	I1028 12:21:19.270782  186547 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:21:19.270787  186547 kubeadm.go:310] 
	I1028 12:21:19.270816  186547 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:21:19.270875  186547 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:21:19.270938  186547 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:21:19.270949  186547 kubeadm.go:310] 
	I1028 12:21:19.271022  186547 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:21:19.271063  186547 kubeadm.go:310] 
	I1028 12:21:19.271165  186547 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:21:19.271190  186547 kubeadm.go:310] 
	I1028 12:21:19.271266  186547 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:21:19.271380  186547 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:21:19.271470  186547 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:21:19.271479  186547 kubeadm.go:310] 
	I1028 12:21:19.271600  186547 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:21:19.271697  186547 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:21:19.271709  186547 kubeadm.go:310] 
	I1028 12:21:19.271838  186547 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272010  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:21:19.272068  186547 kubeadm.go:310] 	--control-plane 
	I1028 12:21:19.272079  186547 kubeadm.go:310] 
	I1028 12:21:19.272250  186547 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:21:19.272270  186547 kubeadm.go:310] 
	I1028 12:21:19.272391  186547 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272568  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 12:21:19.273899  186547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:21:19.273955  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:21:19.273977  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:21:19.275868  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:21:18.355132  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:18.373260  185546 api_server.go:72] duration metric: took 4m14.615888944s to wait for apiserver process to appear ...
	I1028 12:21:18.373292  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:18.373353  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:18.373410  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:18.413207  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.413239  185546 cri.go:89] found id: ""
	I1028 12:21:18.413250  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:18.413336  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.419588  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:18.419655  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:18.476341  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.476373  185546 cri.go:89] found id: ""
	I1028 12:21:18.476383  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:18.476450  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.482835  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:18.482926  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:18.524934  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.524964  185546 cri.go:89] found id: ""
	I1028 12:21:18.524975  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:18.525040  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.530198  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:18.530284  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:18.577310  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:18.577338  185546 cri.go:89] found id: ""
	I1028 12:21:18.577349  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:18.577413  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.583048  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:18.583133  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:18.622556  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:18.622587  185546 cri.go:89] found id: ""
	I1028 12:21:18.622598  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:18.622701  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.628450  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:18.628540  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:18.674827  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:18.674861  185546 cri.go:89] found id: ""
	I1028 12:21:18.674873  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:18.674943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.680282  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:18.680354  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:18.738014  185546 cri.go:89] found id: ""
	I1028 12:21:18.738044  185546 logs.go:282] 0 containers: []
	W1028 12:21:18.738061  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:18.738070  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:18.738142  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:18.780615  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:18.780645  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:18.780651  185546 cri.go:89] found id: ""
	I1028 12:21:18.780660  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:18.780725  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.786003  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.790208  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:18.790231  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:18.806481  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:18.806523  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.853343  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:18.853382  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.906386  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:18.906424  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.948149  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:18.948182  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:19.000642  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:19.000678  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:19.038715  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:19.038744  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:19.079234  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:19.079271  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:19.147309  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:19.147349  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:19.271582  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:19.271620  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:19.319149  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:19.319195  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:19.385399  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:19.385437  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:19.811993  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:19.812035  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:19.277402  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:21:19.296307  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:21:19.323315  186547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-349222 minikube.k8s.io/updated_at=2024_10_28T12_21_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=default-k8s-diff-port-349222 minikube.k8s.io/primary=true
	I1028 12:21:19.550855  186547 ops.go:34] apiserver oom_adj: -16
	I1028 12:21:19.550882  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.051004  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.551001  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.051215  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.551283  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.050989  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.551423  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.051101  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.151453  186547 kubeadm.go:1113] duration metric: took 3.828156807s to wait for elevateKubeSystemPrivileges
	I1028 12:21:23.151505  186547 kubeadm.go:394] duration metric: took 5m1.103220882s to StartCluster
	I1028 12:21:23.151530  186547 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.151623  186547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:21:23.153557  186547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.153874  186547 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:21:23.153996  186547 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:21:23.154101  186547 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154122  186547 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154133  186547 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:21:23.154128  186547 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154165  186547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-349222"
	I1028 12:21:23.154160  186547 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154197  186547 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154213  186547 addons.go:243] addon metrics-server should already be in state true
	I1028 12:21:23.154167  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154254  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154664  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154679  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154749  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154135  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:21:23.154803  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154844  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154948  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.155649  186547 out.go:177] * Verifying Kubernetes components...
	I1028 12:21:23.157234  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:21:23.172278  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I1028 12:21:23.172870  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.173402  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.173429  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.173851  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.174056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.176299  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1028 12:21:23.176307  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I1028 12:21:23.176897  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177023  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177553  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177576  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177589  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177606  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177887  186547 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.177912  186547 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:21:23.177945  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.177971  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178030  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178369  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178404  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178541  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178572  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178961  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.179002  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.196089  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I1028 12:21:23.197979  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.198578  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.198607  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.199082  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.199301  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.199604  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I1028 12:21:23.200120  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.200519  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.200539  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.200938  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.201204  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.201711  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.201794  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1028 12:21:23.202225  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.202937  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.202956  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.203305  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.203753  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.203791  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.204026  186547 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:21:23.204210  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.205470  186547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:21:23.205490  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:21:23.205554  186547 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:21:23.205576  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.207334  186547 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.207352  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:21:23.207372  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.209573  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.210230  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210366  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.210608  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.210806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.211061  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.211884  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.211910  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.211928  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.212104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.212351  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.212570  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.212762  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.231664  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1028 12:21:23.232283  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.232904  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.232929  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.233414  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.233658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.236162  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.236665  186547 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.236680  186547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:21:23.236700  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.240368  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.240697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240848  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.241034  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.241156  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.241281  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.409461  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:21:23.430686  186547 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442439  186547 node_ready.go:49] node "default-k8s-diff-port-349222" has status "Ready":"True"
	I1028 12:21:23.442466  186547 node_ready.go:38] duration metric: took 11.749381ms for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442480  186547 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:23.447741  186547 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:23.515393  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.545556  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.575253  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:21:23.575280  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:21:23.663892  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:21:23.663920  186547 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:21:23.745621  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:23.745656  186547 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:21:23.823360  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:24.391754  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.391789  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.392092  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.392112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.392123  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.392130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393697  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393716  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.393725  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.393733  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393810  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393828  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393886  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394088  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.394112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.413957  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.414000  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.414363  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.414385  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853053  186547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029641945s)
	I1028 12:21:24.853107  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853123  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853434  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.853492  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853501  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853518  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853543  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853784  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853801  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853813  186547 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-349222"
	I1028 12:21:24.855707  186547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:21:22.373623  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:21:22.379559  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:21:22.380750  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:22.380772  185546 api_server.go:131] duration metric: took 4.007460794s to wait for apiserver health ...
	I1028 12:21:22.380783  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:22.380811  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:22.380875  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:22.426678  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:22.426710  185546 cri.go:89] found id: ""
	I1028 12:21:22.426720  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:22.426781  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.431942  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:22.432014  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:22.472504  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:22.472531  185546 cri.go:89] found id: ""
	I1028 12:21:22.472540  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:22.472595  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.478446  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:22.478511  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:22.520149  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.520169  185546 cri.go:89] found id: ""
	I1028 12:21:22.520177  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:22.520235  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.525716  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:22.525804  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:22.564801  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:22.564832  185546 cri.go:89] found id: ""
	I1028 12:21:22.564844  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:22.564909  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.570065  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:22.570147  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:22.613601  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.613628  185546 cri.go:89] found id: ""
	I1028 12:21:22.613637  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:22.613700  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.618413  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:22.618483  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:22.664329  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.664358  185546 cri.go:89] found id: ""
	I1028 12:21:22.664369  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:22.664430  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.669013  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:22.669084  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:22.706046  185546 cri.go:89] found id: ""
	I1028 12:21:22.706074  185546 logs.go:282] 0 containers: []
	W1028 12:21:22.706084  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:22.706091  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:22.706159  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:22.747718  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.747744  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.747750  185546 cri.go:89] found id: ""
	I1028 12:21:22.747759  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:22.747825  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.752857  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.758383  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:22.758410  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.800846  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:22.800882  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.858663  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:22.858702  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.896915  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:22.896959  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.938476  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:22.938503  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.984601  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:22.984628  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:23.000223  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:23.000259  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:23.130709  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:23.130746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:23.189821  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:23.189859  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:23.244463  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:23.244535  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:23.299279  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:23.299318  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:23.714691  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:23.714730  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:23.777703  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:23.777749  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
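The log-gathering pass above pairs two crictl calls per component: "crictl ps -a --quiet --name=<component>" to resolve container IDs, then "crictl logs --tail 400 <id>" to dump each one. A rough standalone equivalent in Go, intended to run on the node itself; the helper names and the component list are illustrative, and the 400-line tail mirrors the limit used in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs asks crictl for all container IDs (running or exited) whose
// name matches the given filter, e.g. "kube-apiserver".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs dumps the last n log lines of one container via crictl.
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id, 400)
			fmt.Printf("=== %s (%s) ===\n%s\n", component, id, logs)
		}
	}
}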
	I1028 12:21:26.364133  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:21:26.364166  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.364171  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.364175  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.364179  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.364182  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.364185  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.364191  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.364195  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.364201  185546 system_pods.go:74] duration metric: took 3.98341316s to wait for pod list to return data ...
	I1028 12:21:26.364209  185546 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:26.366899  185546 default_sa.go:45] found service account: "default"
	I1028 12:21:26.366925  185546 default_sa.go:55] duration metric: took 2.710943ms for default service account to be created ...
	I1028 12:21:26.366934  185546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:26.371193  185546 system_pods.go:86] 8 kube-system pods found
	I1028 12:21:26.371219  185546 system_pods.go:89] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.371224  185546 system_pods.go:89] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.371228  185546 system_pods.go:89] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.371233  185546 system_pods.go:89] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.371237  185546 system_pods.go:89] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.371240  185546 system_pods.go:89] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.371246  185546 system_pods.go:89] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.371250  185546 system_pods.go:89] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.371257  185546 system_pods.go:126] duration metric: took 4.318058ms to wait for k8s-apps to be running ...
	I1028 12:21:26.371265  185546 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:26.371317  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:26.389093  185546 system_svc.go:56] duration metric: took 17.81758ms WaitForService to wait for kubelet
	I1028 12:21:26.389131  185546 kubeadm.go:582] duration metric: took 4m22.631766189s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:26.389158  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:26.392700  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:26.392728  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:26.392741  185546 node_conditions.go:105] duration metric: took 3.576663ms to run NodePressure ...
	I1028 12:21:26.392757  185546 start.go:241] waiting for startup goroutines ...
	I1028 12:21:26.392766  185546 start.go:246] waiting for cluster config update ...
	I1028 12:21:26.392781  185546 start.go:255] writing updated cluster config ...
	I1028 12:21:26.393086  185546 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:26.444274  185546 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:26.446322  185546 out.go:177] * Done! kubectl is now configured to use "no-preload-871884" cluster and "default" namespace by default
	I1028 12:21:24.856866  186547 addons.go:510] duration metric: took 1.702877543s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:21:25.462800  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:27.954511  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:30.454530  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.455161  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.955218  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.955242  186547 pod_ready.go:82] duration metric: took 9.507473956s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.955253  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.960990  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.961018  186547 pod_ready.go:82] duration metric: took 5.757431ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.961032  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966957  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.966981  186547 pod_ready.go:82] duration metric: took 5.940549ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966991  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972168  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.972194  186547 pod_ready.go:82] duration metric: took 5.195057ms for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972205  186547 pod_ready.go:39] duration metric: took 9.529713389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
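The pod_ready.go waits above ("waiting up to 6m0s for pod ... to be \"Ready\"") check the Ready condition on each control-plane pod. A condensed client-go sketch of that check; the kubeconfig path, pod name, and 2-second poll interval are placeholders, and minikube's real loop carries more label-selector and retry logic:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one control-plane pod until Ready or a 6m deadline, mirroring the
	// "waiting up to 6m0s" messages in the log above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-default-k8s-diff-port-349222", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}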
	I1028 12:21:32.972224  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:32.972294  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:32.988675  186547 api_server.go:72] duration metric: took 9.83476496s to wait for apiserver process to appear ...
	I1028 12:21:32.988711  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:32.988736  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:21:32.993068  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:21:32.994352  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:32.994377  186547 api_server.go:131] duration metric: took 5.656136ms to wait for apiserver health ...
	I1028 12:21:32.994387  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:32.999982  186547 system_pods.go:59] 9 kube-system pods found
	I1028 12:21:33.000010  186547 system_pods.go:61] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.000017  186547 system_pods.go:61] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.000024  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.000029  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.000033  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.000037  186547 system_pods.go:61] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.000040  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.000046  186547 system_pods.go:61] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.000051  186547 system_pods.go:61] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.000064  186547 system_pods.go:74] duration metric: took 5.66991ms to wait for pod list to return data ...
	I1028 12:21:33.000075  186547 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:33.003124  186547 default_sa.go:45] found service account: "default"
	I1028 12:21:33.003149  186547 default_sa.go:55] duration metric: took 3.067652ms for default service account to be created ...
	I1028 12:21:33.003159  186547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:33.155864  186547 system_pods.go:86] 9 kube-system pods found
	I1028 12:21:33.155902  186547 system_pods.go:89] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.155914  186547 system_pods.go:89] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.155921  186547 system_pods.go:89] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.155931  186547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.155938  186547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.155943  186547 system_pods.go:89] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.155948  186547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.155956  186547 system_pods.go:89] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.155965  186547 system_pods.go:89] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.155977  186547 system_pods.go:126] duration metric: took 152.809784ms to wait for k8s-apps to be running ...
	I1028 12:21:33.155991  186547 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:33.156049  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:33.171592  186547 system_svc.go:56] duration metric: took 15.589436ms WaitForService to wait for kubelet
	I1028 12:21:33.171647  186547 kubeadm.go:582] duration metric: took 10.017726239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:33.171672  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:33.352932  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:33.352984  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:33.352995  186547 node_conditions.go:105] duration metric: took 181.317488ms to run NodePressure ...
	I1028 12:21:33.353006  186547 start.go:241] waiting for startup goroutines ...
	I1028 12:21:33.353014  186547 start.go:246] waiting for cluster config update ...
	I1028 12:21:33.353024  186547 start.go:255] writing updated cluster config ...
	I1028 12:21:33.353314  186547 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:33.405276  186547 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:33.407589  186547 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-349222" cluster and "default" namespace by default
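The node_conditions lines a few entries above (ephemeral storage 17734596Ki, cpu capacity 2, NodePressure verification) read node capacity and pressure conditions from the API. A small client-go sketch of the same lookup; the kubeconfig path is a placeholder and the output format is illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity values correspond to the "node storage ephemeral capacity"
		// and "node cpu capacity" lines in the log.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  %s is True (node under pressure)\n", c.Type)
				}
			}
		}
	}
}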
	I1028 12:22:04.038479  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:22:04.038595  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:22:04.040170  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.040244  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.040356  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.040466  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.040579  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:04.040700  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:04.042557  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:04.042662  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:04.042757  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:04.042877  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:04.042984  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:04.043096  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:04.043158  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:04.043247  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:04.043341  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:04.043442  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:04.043558  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:04.043622  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:04.043675  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:04.043718  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:04.043768  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:04.043825  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:04.043871  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:04.044021  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:04.044164  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:04.044224  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:04.044332  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:04.046085  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:04.046237  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:04.046370  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:04.046463  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:04.046544  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:04.046679  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:04.046728  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:04.046786  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.046976  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047099  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047318  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047393  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047554  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047611  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047799  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047892  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.048151  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.048167  186170 kubeadm.go:310] 
	I1028 12:22:04.048208  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:22:04.048252  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:22:04.048262  186170 kubeadm.go:310] 
	I1028 12:22:04.048317  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:22:04.048363  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:22:04.048453  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:22:04.048464  186170 kubeadm.go:310] 
	I1028 12:22:04.048557  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:22:04.048604  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:22:04.048658  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:22:04.048672  186170 kubeadm.go:310] 
	I1028 12:22:04.048789  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:22:04.048872  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:22:04.048879  186170 kubeadm.go:310] 
	I1028 12:22:04.049027  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:22:04.049143  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:22:04.049246  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:22:04.049347  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:22:04.049428  186170 kubeadm.go:310] 
	W1028 12:22:04.049541  186170 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
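The repeated [kubelet-check] failures above all trace back to the kubelet not answering on http://localhost:10248/healthz. A small Go sketch of the triage kubeadm suggests (service status, recent journal, healthz probe), assuming it is run directly on the affected node; the 50-line journal tail and 3-second HTTP timeout are arbitrary illustrative choices:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Exit status 0 means the unit is active; anything else matches the
	// repeated "kubelet isn't running or healthy" messages in the log.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
	}

	// Recent kubelet journal, the second command kubeadm's error text suggests.
	journal, err := exec.Command("sudo", "journalctl", "-xeu", "kubelet", "-n", "50").CombinedOutput()
	if err == nil {
		fmt.Printf("recent kubelet journal:\n%s\n", journal)
	}

	// Probe the same healthz endpoint the [kubelet-check] phase curls.
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz probe failed:", err) // e.g. the "connection refused" seen above
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}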
	
	I1028 12:22:04.049593  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:22:04.555608  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:22:04.571673  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:22:04.583645  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:22:04.583667  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:22:04.583708  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:22:04.594436  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:22:04.594497  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:22:04.605784  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:22:04.616699  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:22:04.616781  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:22:04.628581  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.639511  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:22:04.639608  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.650503  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:22:04.662383  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:22:04.662445  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
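The config check just above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes files that are stale or missing before retrying kubeadm init. A minimal sketch of that cleanup in Go; error handling is simplified, and the endpoint string and file list are taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Remove the file if it is unreadable/missing or does not reference
		// the expected control-plane endpoint, mirroring the grep + rm -f
		// sequence in the log above.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s is missing or stale, removing\n", f)
			_ = os.Remove(f)
		}
	}
}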
	I1028 12:22:04.673286  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:22:04.755504  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.755597  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.903636  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.903808  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.903902  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:05.095520  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:05.097710  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:05.097850  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:05.097937  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:05.098061  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:05.098152  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:05.098252  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:05.098346  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:05.098440  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:05.098905  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:05.099253  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:05.099726  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:05.099786  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:05.099872  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:05.357781  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:05.538771  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:05.744145  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:06.074866  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:06.090636  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:06.091772  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:06.091863  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:06.255534  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:06.257598  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:06.257740  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:06.264309  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:06.266553  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:06.266699  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:06.268340  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:46.271413  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:46.271550  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:46.271812  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:51.271863  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:51.272118  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:01.272732  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:01.272940  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:21.273621  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:21.273888  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.272718  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:24:01.273041  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.273073  186170 kubeadm.go:310] 
	I1028 12:24:01.273126  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:24:01.273220  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:24:01.273249  186170 kubeadm.go:310] 
	I1028 12:24:01.273303  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:24:01.273375  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:24:01.273508  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:24:01.273520  186170 kubeadm.go:310] 
	I1028 12:24:01.273665  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:24:01.273717  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:24:01.273760  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:24:01.273770  186170 kubeadm.go:310] 
	I1028 12:24:01.273900  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:24:01.273966  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:24:01.273972  186170 kubeadm.go:310] 
	I1028 12:24:01.274090  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:24:01.274165  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:24:01.274233  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:24:01.274294  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:24:01.274302  186170 kubeadm.go:310] 
	I1028 12:24:01.275128  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:24:01.275221  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:24:01.275324  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:24:01.275400  186170 kubeadm.go:394] duration metric: took 7m59.062813621s to StartCluster
	I1028 12:24:01.275480  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:24:01.275551  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:24:01.326735  186170 cri.go:89] found id: ""
	I1028 12:24:01.326760  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.326767  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:24:01.326774  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:24:01.326822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:24:01.368065  186170 cri.go:89] found id: ""
	I1028 12:24:01.368094  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.368103  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:24:01.368109  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:24:01.368162  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:24:01.410391  186170 cri.go:89] found id: ""
	I1028 12:24:01.410425  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.410437  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:24:01.410446  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:24:01.410515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:24:01.453290  186170 cri.go:89] found id: ""
	I1028 12:24:01.453332  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.453343  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:24:01.453361  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:24:01.453422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:24:01.490513  186170 cri.go:89] found id: ""
	I1028 12:24:01.490540  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.490547  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:24:01.490553  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:24:01.490600  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:24:01.528320  186170 cri.go:89] found id: ""
	I1028 12:24:01.528350  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.528361  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:24:01.528369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:24:01.528430  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:24:01.566998  186170 cri.go:89] found id: ""
	I1028 12:24:01.567030  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.567041  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:24:01.567050  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:24:01.567113  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:24:01.600946  186170 cri.go:89] found id: ""
	I1028 12:24:01.600973  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.600983  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:24:01.600997  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:24:01.601018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:24:01.615132  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:24:01.615161  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:24:01.737336  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:24:01.737371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:24:01.737387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:24:01.862216  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:24:01.862257  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:24:01.906635  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:24:01.906666  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:24:01.959555  186170 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:24:01.959629  186170 out.go:270] * 
	W1028 12:24:01.959691  186170 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.959706  186170 out.go:270] * 
	W1028 12:24:01.960513  186170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:24:01.963818  186170 out.go:201] 
	W1028 12:24:01.965768  186170 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.965852  186170 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:24:01.965874  186170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:24:01.967350  186170 out.go:201] 
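For reference, the remediation suggested in the output above can be tried on a fresh start; this is only a sketch, and the profile name and any other flags are placeholders rather than values taken from this run:

	# Retry with the kubelet cgroup driver pinned to systemd, as the log suggests.
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

	# The kubelet health endpoint that was refusing connections above, plus the
	# service status and journal named in the kubeadm advice:
	curl -sSL http://localhost:10248/healthz
	systemctl status kubelet
	journalctl -xeu kubelet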
	
	
	==> CRI-O <==
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.443745452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118628443724306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc5a9a5a-0ed2-43b5-8173-9511ef0f5410 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.444432951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8751cdf9-9b12-413d-b8a3-4e57b267b4d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.444511769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8751cdf9-9b12-413d-b8a3-4e57b267b4d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.444706549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730117852265637465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd0b1cfaed8e317301e345e1380e4c8f691d16be55f60a8174e55e14348cf5,PodSandboxId:3de4d0044ee1509235d20e9c7826b58bfdeb7d7ed66e9adbc86411fcdd1bdee4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730117832153494768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6477bdaa-a202-4792-8bac-8a62b685f645,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71,PodSandboxId:0d6cfae4d63d5dd14d0ef8021ee38a17a03b57d15295048db723c2346ee0ee15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730117828869841621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dg2jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88811f8d-8c45-4ef1-bbf1-8ca151e23d9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730117821627750148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0,PodSandboxId:6acfae32b1e728c4c74e76773b32324192640b63117d573fcbda77727b7b69d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730117821547400166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6rc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92def3e4-45f2-4daa-bd07-5366d364a0
70,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a,PodSandboxId:bee90f9d94d0f0741821a0be06b549d26a92f9d92e1f666eeec2a5c38117f3e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730117816786581473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b72734e1118e90a3e1958d2d15622fd,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7,PodSandboxId:317e986bd949f52a752da60d8d43ef4d4c47aec994d660e525d20e57d03b6784,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730117816799766969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c792fffddb215c8221c3b823ad20352,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221,PodSandboxId:789363630bf5ce72260d96572c6cf0d2008fe42ae9d68c325cc3e01863f303cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730117816748322833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa0aea6a3f71fe70097f4d10ab396e3,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b,PodSandboxId:684d536158c9e09cc6c37e05af9a77fcd62786098c4f4baee59ad048e0be121e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730117816670840177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402113625021a0c8ff4e05374d9ddd07,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8751cdf9-9b12-413d-b8a3-4e57b267b4d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.486276676Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73973e6e-4f87-4531-ab7c-6efa2940d19e name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.486366507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73973e6e-4f87-4531-ab7c-6efa2940d19e name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.487396225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9349cb2d-442a-4535-a61b-7f22110db734 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.488420483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118628487721632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9349cb2d-442a-4535-a61b-7f22110db734 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.489599308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da4f609d-327b-4f0a-8784-b36959312ac9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.489654238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da4f609d-327b-4f0a-8784-b36959312ac9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.490745386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730117852265637465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd0b1cfaed8e317301e345e1380e4c8f691d16be55f60a8174e55e14348cf5,PodSandboxId:3de4d0044ee1509235d20e9c7826b58bfdeb7d7ed66e9adbc86411fcdd1bdee4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730117832153494768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6477bdaa-a202-4792-8bac-8a62b685f645,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71,PodSandboxId:0d6cfae4d63d5dd14d0ef8021ee38a17a03b57d15295048db723c2346ee0ee15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730117828869841621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dg2jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88811f8d-8c45-4ef1-bbf1-8ca151e23d9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730117821627750148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0,PodSandboxId:6acfae32b1e728c4c74e76773b32324192640b63117d573fcbda77727b7b69d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730117821547400166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6rc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92def3e4-45f2-4daa-bd07-5366d364a0
70,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a,PodSandboxId:bee90f9d94d0f0741821a0be06b549d26a92f9d92e1f666eeec2a5c38117f3e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730117816786581473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b72734e1118e90a3e1958d2d15622fd,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7,PodSandboxId:317e986bd949f52a752da60d8d43ef4d4c47aec994d660e525d20e57d03b6784,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730117816799766969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c792fffddb215c8221c3b823ad20352,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221,PodSandboxId:789363630bf5ce72260d96572c6cf0d2008fe42ae9d68c325cc3e01863f303cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730117816748322833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa0aea6a3f71fe70097f4d10ab396e3,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b,PodSandboxId:684d536158c9e09cc6c37e05af9a77fcd62786098c4f4baee59ad048e0be121e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730117816670840177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402113625021a0c8ff4e05374d9ddd07,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da4f609d-327b-4f0a-8784-b36959312ac9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.532433686Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67567971-089a-40c4-ba06-2bc7a77f4f4a name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.532514390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67567971-089a-40c4-ba06-2bc7a77f4f4a name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.533582252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8f8ba21-23fa-4194-9ed8-4f734ac0c7b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.533922639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118628533900781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8f8ba21-23fa-4194-9ed8-4f734ac0c7b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.534668807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b6b9775-f63d-46aa-8627-5b926f78c188 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.534741796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b6b9775-f63d-46aa-8627-5b926f78c188 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.534933261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730117852265637465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd0b1cfaed8e317301e345e1380e4c8f691d16be55f60a8174e55e14348cf5,PodSandboxId:3de4d0044ee1509235d20e9c7826b58bfdeb7d7ed66e9adbc86411fcdd1bdee4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730117832153494768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6477bdaa-a202-4792-8bac-8a62b685f645,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71,PodSandboxId:0d6cfae4d63d5dd14d0ef8021ee38a17a03b57d15295048db723c2346ee0ee15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730117828869841621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dg2jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88811f8d-8c45-4ef1-bbf1-8ca151e23d9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730117821627750148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0,PodSandboxId:6acfae32b1e728c4c74e76773b32324192640b63117d573fcbda77727b7b69d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730117821547400166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6rc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92def3e4-45f2-4daa-bd07-5366d364a0
70,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a,PodSandboxId:bee90f9d94d0f0741821a0be06b549d26a92f9d92e1f666eeec2a5c38117f3e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730117816786581473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b72734e1118e90a3e1958d2d15622fd,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7,PodSandboxId:317e986bd949f52a752da60d8d43ef4d4c47aec994d660e525d20e57d03b6784,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730117816799766969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c792fffddb215c8221c3b823ad20352,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221,PodSandboxId:789363630bf5ce72260d96572c6cf0d2008fe42ae9d68c325cc3e01863f303cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730117816748322833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa0aea6a3f71fe70097f4d10ab396e3,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b,PodSandboxId:684d536158c9e09cc6c37e05af9a77fcd62786098c4f4baee59ad048e0be121e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730117816670840177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402113625021a0c8ff4e05374d9ddd07,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b6b9775-f63d-46aa-8627-5b926f78c188 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.569144631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de33614c-bde6-4ea0-8260-1f896ba2fa41 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.569222643Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de33614c-bde6-4ea0-8260-1f896ba2fa41 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.570364711Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bd347c9-ebcb-4ebe-a243-ea7c4821311d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.570693172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118628570672861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bd347c9-ebcb-4ebe-a243-ea7c4821311d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.571209040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea112c9b-b8f3-47c5-af78-c304f732b790 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.571259441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea112c9b-b8f3-47c5-af78-c304f732b790 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:28 no-preload-871884 crio[701]: time="2024-10-28 12:30:28.571458604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730117852265637465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd0b1cfaed8e317301e345e1380e4c8f691d16be55f60a8174e55e14348cf5,PodSandboxId:3de4d0044ee1509235d20e9c7826b58bfdeb7d7ed66e9adbc86411fcdd1bdee4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730117832153494768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6477bdaa-a202-4792-8bac-8a62b685f645,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71,PodSandboxId:0d6cfae4d63d5dd14d0ef8021ee38a17a03b57d15295048db723c2346ee0ee15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730117828869841621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dg2jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88811f8d-8c45-4ef1-bbf1-8ca151e23d9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730117821627750148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0,PodSandboxId:6acfae32b1e728c4c74e76773b32324192640b63117d573fcbda77727b7b69d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730117821547400166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6rc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92def3e4-45f2-4daa-bd07-5366d364a0
70,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a,PodSandboxId:bee90f9d94d0f0741821a0be06b549d26a92f9d92e1f666eeec2a5c38117f3e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730117816786581473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b72734e1118e90a3e1958d2d15622fd,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7,PodSandboxId:317e986bd949f52a752da60d8d43ef4d4c47aec994d660e525d20e57d03b6784,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730117816799766969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c792fffddb215c8221c3b823ad20352,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221,PodSandboxId:789363630bf5ce72260d96572c6cf0d2008fe42ae9d68c325cc3e01863f303cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730117816748322833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa0aea6a3f71fe70097f4d10ab396e3,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b,PodSandboxId:684d536158c9e09cc6c37e05af9a77fcd62786098c4f4baee59ad048e0be121e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730117816670840177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402113625021a0c8ff4e05374d9ddd07,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea112c9b-b8f3-47c5-af78-c304f732b790 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8be2c80f222fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   da89f953a1d95       storage-provisioner
	1cfd0b1cfaed8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   3de4d0044ee15       busybox
	9a21fcd9e6d82       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   0d6cfae4d63d5       coredns-7c65d6cfc9-dg2jd
	3576b8af85140       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   da89f953a1d95       storage-provisioner
	1edb7fc86811a       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   6acfae32b1e72       kube-proxy-6rc4l
	d66cdd02dd211       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   317e986bd949f       etcd-no-preload-871884
	9473dbbdab672       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   bee90f9d94d0f       kube-scheduler-no-preload-871884
	6d5abde055384       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   789363630bf5c       kube-apiserver-no-preload-871884
	16a1ce9b3f38f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   684d536158c9e       kube-controller-manager-no-preload-871884
	
	
	==> coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45001 - 56249 "HINFO IN 2374450671205517086.5057763595071633460. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028030753s
	
	
	==> describe nodes <==
	Name:               no-preload-871884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-871884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=no-preload-871884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_07_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:07:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-871884
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:30:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:27:42 +0000   Mon, 28 Oct 2024 12:07:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:27:42 +0000   Mon, 28 Oct 2024 12:07:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:27:42 +0000   Mon, 28 Oct 2024 12:07:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:27:42 +0000   Mon, 28 Oct 2024 12:17:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.156
	  Hostname:    no-preload-871884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac90635fe0f24ef7972af2d0c7fd5465
	  System UUID:                ac90635f-e0f2-4ef7-972a-f2d0c7fd5465
	  Boot ID:                    82ccc450-12db-4ea8-95eb-2b73f2d929bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-dg2jd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-871884                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-871884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-871884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-6rc4l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-871884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-6867b74b74-xr9lt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-871884 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-871884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-871884 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node no-preload-871884 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-871884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-871884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-871884 status is now: NodeHasSufficientPID
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-871884 event: Registered Node no-preload-871884 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-871884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-871884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-871884 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-871884 event: Registered Node no-preload-871884 in Controller
	
	
	==> dmesg <==
	[Oct28 12:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056198] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044787] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.203142] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.829748] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.647218] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.292031] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.070650] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068809] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.180329] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.131699] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.318490] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[ +16.558484] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.068179] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.836773] systemd-fstab-generator[1420]: Ignoring "noauto" option for root device
	[Oct28 12:17] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.415196] systemd-fstab-generator[2057]: Ignoring "noauto" option for root device
	[  +3.249930] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.089529] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] <==
	{"level":"info","ts":"2024-10-28T12:16:57.705288Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:16:57.714615Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T12:16:57.715172Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"35a4f6e665918b01","initial-advertise-peer-urls":["https://192.168.72.156:2380"],"listen-peer-urls":["https://192.168.72.156:2380"],"advertise-client-urls":["https://192.168.72.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T12:16:57.715275Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T12:16:57.715503Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.156:2380"}
	{"level":"info","ts":"2024-10-28T12:16:57.715588Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.156:2380"}
	{"level":"info","ts":"2024-10-28T12:16:58.830515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35a4f6e665918b01 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-28T12:16:58.830593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35a4f6e665918b01 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-28T12:16:58.830619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35a4f6e665918b01 received MsgPreVoteResp from 35a4f6e665918b01 at term 2"}
	{"level":"info","ts":"2024-10-28T12:16:58.830631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35a4f6e665918b01 became candidate at term 3"}
	{"level":"info","ts":"2024-10-28T12:16:58.830637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35a4f6e665918b01 received MsgVoteResp from 35a4f6e665918b01 at term 3"}
	{"level":"info","ts":"2024-10-28T12:16:58.830646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"35a4f6e665918b01 became leader at term 3"}
	{"level":"info","ts":"2024-10-28T12:16:58.830671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 35a4f6e665918b01 elected leader 35a4f6e665918b01 at term 3"}
	{"level":"info","ts":"2024-10-28T12:16:58.832274Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:16:58.832466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:16:58.832738Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:16:58.832769Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:16:58.832279Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"35a4f6e665918b01","local-member-attributes":"{Name:no-preload-871884 ClientURLs:[https://192.168.72.156:2379]}","request-path":"/0/members/35a4f6e665918b01/attributes","cluster-id":"6558905c6662fd32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:16:58.833446Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:16:58.833470Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:16:58.834386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T12:16:58.834700Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.156:2379"}
	{"level":"info","ts":"2024-10-28T12:26:58.860729Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":884}
	{"level":"info","ts":"2024-10-28T12:26:58.877475Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":884,"took":"16.238822ms","hash":1806140554,"current-db-size-bytes":2682880,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2682880,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-28T12:26:58.877567Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1806140554,"revision":884,"compact-revision":-1}
	
	
	==> kernel <==
	 12:30:28 up 14 min,  0 users,  load average: 0.05, 0.11, 0.09
	Linux no-preload-871884 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] <==
	W1028 12:27:01.167408       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:27:01.167492       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:27:01.168683       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:27:01.168704       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:28:01.168963       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:28:01.169159       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 12:28:01.169244       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:28:01.169281       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 12:28:01.170343       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:28:01.170620       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:30:01.170919       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 12:30:01.170977       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:30:01.171372       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 12:30:01.171454       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:30:01.172602       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:30:01.172740       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] <==
	E1028 12:25:03.885949       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:25:04.328924       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:25:33.892576       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:25:34.336946       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:26:03.898908       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:26:04.346168       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:26:33.905489       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:26:34.354312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:27:03.911539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:27:04.365832       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:27:33.918075       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:27:34.374558       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:27:42.643720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-871884"
	E1028 12:28:03.924796       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:28:04.383539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:28:14.091906       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="313.359µs"
	I1028 12:28:26.083903       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="186.508µs"
	E1028 12:28:33.931318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:28:34.391847       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:29:03.938448       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:29:04.399690       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:29:33.944856       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:29:34.407659       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:30:03.952248       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:30:04.415861       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:17:01.779432       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:17:01.792752       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.156"]
	E1028 12:17:01.792840       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:17:01.831185       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:17:01.831230       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:17:01.831266       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:17:01.833698       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:17:01.834202       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:17:01.834238       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:17:01.836164       1 config.go:199] "Starting service config controller"
	I1028 12:17:01.836204       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:17:01.836234       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:17:01.836258       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:17:01.837063       1 config.go:328] "Starting node config controller"
	I1028 12:17:01.837095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:17:01.937063       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 12:17:01.937115       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:17:01.937126       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] <==
	I1028 12:16:58.062633       1 serving.go:386] Generated self-signed cert in-memory
	W1028 12:17:00.173613       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 12:17:00.173719       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:17:00.173752       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 12:17:00.173775       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 12:17:00.192176       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 12:17:00.192270       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:17:00.194383       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 12:17:00.194557       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 12:17:00.194620       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 12:17:00.194692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:17:00.295175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 12:29:16 no-preload-871884 kubelet[1427]: E1028 12:29:16.260959    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118556260222339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:22 no-preload-871884 kubelet[1427]: E1028 12:29:22.065489    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:29:26 no-preload-871884 kubelet[1427]: E1028 12:29:26.263564    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118566261836378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:26 no-preload-871884 kubelet[1427]: E1028 12:29:26.264583    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118566261836378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:36 no-preload-871884 kubelet[1427]: E1028 12:29:36.066696    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:29:36 no-preload-871884 kubelet[1427]: E1028 12:29:36.266129    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118576265795052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:36 no-preload-871884 kubelet[1427]: E1028 12:29:36.266198    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118576265795052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:46 no-preload-871884 kubelet[1427]: E1028 12:29:46.267506    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118586267259777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:46 no-preload-871884 kubelet[1427]: E1028 12:29:46.267548    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118586267259777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:50 no-preload-871884 kubelet[1427]: E1028 12:29:50.065882    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:29:56 no-preload-871884 kubelet[1427]: E1028 12:29:56.088428    1427 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 12:29:56 no-preload-871884 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 12:29:56 no-preload-871884 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 12:29:56 no-preload-871884 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 12:29:56 no-preload-871884 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 12:29:56 no-preload-871884 kubelet[1427]: E1028 12:29:56.269500    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118596269202032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:56 no-preload-871884 kubelet[1427]: E1028 12:29:56.269548    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118596269202032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:03 no-preload-871884 kubelet[1427]: E1028 12:30:03.065742    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:30:06 no-preload-871884 kubelet[1427]: E1028 12:30:06.272365    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118606271920449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:06 no-preload-871884 kubelet[1427]: E1028 12:30:06.272393    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118606271920449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:16 no-preload-871884 kubelet[1427]: E1028 12:30:16.273755    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118616273423571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:16 no-preload-871884 kubelet[1427]: E1028 12:30:16.273813    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118616273423571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:17 no-preload-871884 kubelet[1427]: E1028 12:30:17.065626    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:30:26 no-preload-871884 kubelet[1427]: E1028 12:30:26.275394    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118626274900826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:26 no-preload-871884 kubelet[1427]: E1028 12:30:26.275754    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118626274900826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] <==
	I1028 12:17:01.751658       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1028 12:17:31.754707       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] <==
	I1028 12:17:32.376322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:17:32.399429       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:17:32.399510       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:17:49.801847       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:17:49.802133       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-871884_69447087-feff-4949-a3e4-b8b1c4a352ae!
	I1028 12:17:49.802315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fcd4864f-0556-4b38-ba15-d73472c15cbf", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-871884_69447087-feff-4949-a3e4-b8b1c4a352ae became leader
	I1028 12:17:49.904977       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-871884_69447087-feff-4949-a3e4-b8b1c4a352ae!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-871884 -n no-preload-871884
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-871884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xr9lt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-871884 describe pod metrics-server-6867b74b74-xr9lt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-871884 describe pod metrics-server-6867b74b74-xr9lt: exit status 1 (63.96457ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xr9lt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-871884 describe pod metrics-server-6867b74b74-xr9lt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1028 12:22:38.998090  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-28 12:30:33.97935704 +0000 UTC m=+5756.647581427
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-349222 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-349222 logs -n 25: (2.21685511s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-601400                              | cert-expiration-601400       | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-871884             | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-219559 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | disable-driver-mounts-219559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:10 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709250            | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC | 28 Oct 24 12:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089993        | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-871884                  | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-349222  | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709250                 | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089993             | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-349222       | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:13 UTC | 28 Oct 24 12:21 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:13:02
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:13:02.452508  186547 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:13:02.452621  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452630  186547 out.go:358] Setting ErrFile to fd 2...
	I1028 12:13:02.452635  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452828  186547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:13:02.453378  186547 out.go:352] Setting JSON to false
	I1028 12:13:02.454320  186547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6925,"bootTime":1730110657,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:13:02.454420  186547 start.go:139] virtualization: kvm guest
	I1028 12:13:02.456754  186547 out.go:177] * [default-k8s-diff-port-349222] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:13:02.458343  186547 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:13:02.458413  186547 notify.go:220] Checking for updates...
	I1028 12:13:02.460946  186547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:13:02.462089  186547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:13:02.463460  186547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:13:02.464649  186547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:13:02.466107  186547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:13:02.468142  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:13:02.468518  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.468587  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.483793  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1028 12:13:02.484273  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.484861  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.484884  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.485260  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.485471  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.485712  186547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:13:02.485997  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.486030  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.501110  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I1028 12:13:02.501722  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.502335  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.502362  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.502682  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.502878  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.539766  186547 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:13:02.541024  186547 start.go:297] selected driver: kvm2
	I1028 12:13:02.541038  186547 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.541168  186547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:13:02.541929  186547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.542014  186547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:13:02.557443  186547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:13:02.557868  186547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:13:02.557902  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:13:02.557947  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:13:02.557987  186547 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.558098  186547 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.560651  186547 out.go:177] * Starting "default-k8s-diff-port-349222" primary control-plane node in "default-k8s-diff-port-349222" cluster
	I1028 12:13:02.693744  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:02.561767  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:13:02.561800  186547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:13:02.561810  186547 cache.go:56] Caching tarball of preloaded images
	I1028 12:13:02.561877  186547 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:13:02.561887  186547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:13:02.561973  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:13:02.562165  186547 start.go:360] acquireMachinesLock for default-k8s-diff-port-349222: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:13:08.773770  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:11.845825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:17.925957  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:20.997733  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:27.077858  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:30.149737  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:36.229851  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:39.301764  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:45.381781  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:48.453767  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:54.533793  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:57.605754  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:03.685848  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:06.757874  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:12.837829  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:15.909778  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:21.989850  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:25.061812  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:31.141825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:34.213757  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:40.293844  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:43.365865  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:49.445872  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:52.517750  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:58.597834  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:01.669837  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:07.749853  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:10.821838  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:13.826298  185942 start.go:364] duration metric: took 3m37.788021766s to acquireMachinesLock for "embed-certs-709250"
	I1028 12:15:13.826369  185942 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:13.826382  185942 fix.go:54] fixHost starting: 
	I1028 12:15:13.827047  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:13.827113  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:13.842889  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I1028 12:15:13.843403  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:13.843915  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:15:13.843938  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:13.844374  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:13.844568  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:13.844733  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:15:13.846440  185942 fix.go:112] recreateIfNeeded on embed-certs-709250: state=Stopped err=<nil>
	I1028 12:15:13.846464  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	W1028 12:15:13.846629  185942 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:13.848878  185942 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709250" ...
	I1028 12:15:13.850607  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Start
	I1028 12:15:13.850800  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring networks are active...
	I1028 12:15:13.851930  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network default is active
	I1028 12:15:13.852331  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network mk-embed-certs-709250 is active
	I1028 12:15:13.852652  185942 main.go:141] libmachine: (embed-certs-709250) Getting domain xml...
	I1028 12:15:13.853394  185942 main.go:141] libmachine: (embed-certs-709250) Creating domain...
	I1028 12:15:15.098667  185942 main.go:141] libmachine: (embed-certs-709250) Waiting to get IP...
	I1028 12:15:15.099525  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.099919  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.099951  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.099877  187018 retry.go:31] will retry after 285.25732ms: waiting for machine to come up
	I1028 12:15:15.386531  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.386992  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.387023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.386921  187018 retry.go:31] will retry after 327.08041ms: waiting for machine to come up
	I1028 12:15:15.715435  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.715900  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.715928  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.715846  187018 retry.go:31] will retry after 443.083162ms: waiting for machine to come up
	I1028 12:15:13.823652  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:13.823724  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824056  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:15:13.824085  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824284  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:15:13.826158  185546 machine.go:96] duration metric: took 4m37.413470632s to provisionDockerMachine
	I1028 12:15:13.826202  185546 fix.go:56] duration metric: took 4m37.436313043s for fixHost
	I1028 12:15:13.826208  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 4m37.436350273s
	W1028 12:15:13.826226  185546 start.go:714] error starting host: provision: host is not running
	W1028 12:15:13.826336  185546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 12:15:13.826346  185546 start.go:729] Will try again in 5 seconds ...
	I1028 12:15:16.160595  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.161024  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.161045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.161003  187018 retry.go:31] will retry after 599.535995ms: waiting for machine to come up
	I1028 12:15:16.761771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.762167  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.762213  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.762114  187018 retry.go:31] will retry after 527.275961ms: waiting for machine to come up
	I1028 12:15:17.290788  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:17.291124  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:17.291145  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:17.291098  187018 retry.go:31] will retry after 858.175967ms: waiting for machine to come up
	I1028 12:15:18.150644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.151045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.151071  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.150996  187018 retry.go:31] will retry after 727.962346ms: waiting for machine to come up
	I1028 12:15:18.880545  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.880990  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.881020  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.880942  187018 retry.go:31] will retry after 1.184956373s: waiting for machine to come up
	I1028 12:15:20.067178  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:20.067603  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:20.067635  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:20.067553  187018 retry.go:31] will retry after 1.635056202s: waiting for machine to come up
	I1028 12:15:18.827987  185546 start.go:360] acquireMachinesLock for no-preload-871884: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:15:21.703941  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:21.704338  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:21.704365  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:21.704302  187018 retry.go:31] will retry after 1.865473383s: waiting for machine to come up
	I1028 12:15:23.572362  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:23.572816  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:23.572843  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:23.572773  187018 retry.go:31] will retry after 2.604970031s: waiting for machine to come up
	I1028 12:15:26.181289  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:26.181849  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:26.181880  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:26.181788  187018 retry.go:31] will retry after 2.866004055s: waiting for machine to come up
	I1028 12:15:29.049604  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:29.050029  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:29.050068  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:29.049970  187018 retry.go:31] will retry after 3.046879869s: waiting for machine to come up
	I1028 12:15:33.350844  186170 start.go:364] duration metric: took 3m34.924904114s to acquireMachinesLock for "old-k8s-version-089993"
	I1028 12:15:33.350912  186170 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:33.350923  186170 fix.go:54] fixHost starting: 
	I1028 12:15:33.351392  186170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:33.351440  186170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:33.368339  186170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1028 12:15:33.368805  186170 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:33.369418  186170 main.go:141] libmachine: Using API Version  1
	I1028 12:15:33.369439  186170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:33.369784  186170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:33.369969  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:33.370125  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetState
	I1028 12:15:33.371873  186170 fix.go:112] recreateIfNeeded on old-k8s-version-089993: state=Stopped err=<nil>
	I1028 12:15:33.371908  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	W1028 12:15:33.372086  186170 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:33.374289  186170 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-089993" ...
	I1028 12:15:32.100252  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.100812  185942 main.go:141] libmachine: (embed-certs-709250) Found IP for machine: 192.168.39.211
	I1028 12:15:32.100831  185942 main.go:141] libmachine: (embed-certs-709250) Reserving static IP address...
	I1028 12:15:32.100842  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has current primary IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.101552  185942 main.go:141] libmachine: (embed-certs-709250) Reserved static IP address: 192.168.39.211
	I1028 12:15:32.101568  185942 main.go:141] libmachine: (embed-certs-709250) Waiting for SSH to be available...
	I1028 12:15:32.101602  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.101629  185942 main.go:141] libmachine: (embed-certs-709250) DBG | skip adding static IP to network mk-embed-certs-709250 - found existing host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"}
	I1028 12:15:32.101644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Getting to WaitForSSH function...
	I1028 12:15:32.104041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.104356  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104459  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH client type: external
	I1028 12:15:32.104488  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa (-rw-------)
	I1028 12:15:32.104519  185942 main.go:141] libmachine: (embed-certs-709250) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:32.104530  185942 main.go:141] libmachine: (embed-certs-709250) DBG | About to run SSH command:
	I1028 12:15:32.104538  185942 main.go:141] libmachine: (embed-certs-709250) DBG | exit 0
	I1028 12:15:32.233966  185942 main.go:141] libmachine: (embed-certs-709250) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:32.234363  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetConfigRaw
	I1028 12:15:32.235010  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.237443  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.237755  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.237783  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.238040  185942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/config.json ...
	I1028 12:15:32.238272  185942 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:32.238291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:32.238541  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.240765  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241165  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.241212  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241333  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.241520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241704  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241836  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.241989  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.242234  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.242247  185942 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:32.358412  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:32.358443  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.358773  185942 buildroot.go:166] provisioning hostname "embed-certs-709250"
	I1028 12:15:32.358810  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.359027  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.361776  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362122  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.362161  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362262  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.362429  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362579  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362709  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.362867  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.363084  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.363098  185942 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709250 && echo "embed-certs-709250" | sudo tee /etc/hostname
	I1028 12:15:32.492437  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709250
	
	I1028 12:15:32.492466  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.495108  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495394  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.495438  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495586  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.495771  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.495927  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.496054  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.496215  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.496399  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.496416  185942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709250/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:32.619038  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:32.619074  185942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:32.619113  185942 buildroot.go:174] setting up certificates
	I1028 12:15:32.619125  185942 provision.go:84] configureAuth start
	I1028 12:15:32.619137  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.619451  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.622055  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622448  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.622479  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622593  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.624610  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625037  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.625066  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625086  185942 provision.go:143] copyHostCerts
	I1028 12:15:32.625174  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:32.625190  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:32.625259  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:32.625396  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:32.625407  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:32.625444  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:32.625519  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:32.625541  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:32.625575  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:32.625645  185942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709250 san=[127.0.0.1 192.168.39.211 embed-certs-709250 localhost minikube]
	I1028 12:15:32.684483  185942 provision.go:177] copyRemoteCerts
	I1028 12:15:32.684547  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:32.684576  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.686926  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687244  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.687284  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687427  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.687617  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.687744  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.687890  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:32.776282  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:32.802180  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 12:15:32.829609  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:32.854274  185942 provision.go:87] duration metric: took 235.133526ms to configureAuth
	I1028 12:15:32.854305  185942 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:32.854474  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:15:32.854547  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.857363  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.857736  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.857771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.858038  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.858251  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858442  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858652  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.858809  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.858979  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.858996  185942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:33.101841  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:33.101870  185942 machine.go:96] duration metric: took 863.584969ms to provisionDockerMachine
	I1028 12:15:33.101883  185942 start.go:293] postStartSetup for "embed-certs-709250" (driver="kvm2")
	I1028 12:15:33.101896  185942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:33.101911  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.102249  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:33.102285  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.105023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.105357  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105493  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.105710  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.105881  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.106032  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.193225  185942 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:33.197548  185942 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:33.197570  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:33.197637  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:33.197739  185942 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:33.197861  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:33.207962  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:33.231808  185942 start.go:296] duration metric: took 129.908529ms for postStartSetup
	I1028 12:15:33.231853  185942 fix.go:56] duration metric: took 19.405472942s for fixHost
	I1028 12:15:33.231875  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.234609  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.234943  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.234979  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.235167  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.235370  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235642  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.235806  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:33.236026  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:33.236041  185942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:33.350639  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117733.322211717
	
	I1028 12:15:33.350663  185942 fix.go:216] guest clock: 1730117733.322211717
	I1028 12:15:33.350673  185942 fix.go:229] Guest: 2024-10-28 12:15:33.322211717 +0000 UTC Remote: 2024-10-28 12:15:33.231858201 +0000 UTC m=+237.345598419 (delta=90.353516ms)
	I1028 12:15:33.350707  185942 fix.go:200] guest clock delta is within tolerance: 90.353516ms
	I1028 12:15:33.350714  185942 start.go:83] releasing machines lock for "embed-certs-709250", held for 19.524379046s
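
The fix.go lines above compare the guest's `date +%s.%N` output against the host's wall clock and accept the machine when the delta (here ~90ms) is within tolerance. The following is a minimal illustrative sketch of that check, not minikube's actual fix.go; the function name and the 2-second tolerance are assumptions.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1730117733.322211717" (seconds.nanoseconds) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9] // keep nanosecond precision only
		} else {
			frac += strings.Repeat("0", 9-len(frac)) // right-pad so "32" means 320000000ns
		}
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1730117733.322211717")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
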
	I1028 12:15:33.350737  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.350974  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:33.353647  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354012  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.354041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354244  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354690  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354873  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354973  185942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:33.355017  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.355090  185942 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:33.355116  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.357679  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358050  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358074  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358242  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358389  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.358542  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.358584  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358616  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358681  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.358721  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358892  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.359048  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.359197  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.443468  185942 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:33.498501  185942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:33.642221  185942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:33.649269  185942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:33.649336  185942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:33.665990  185942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:33.666023  185942 start.go:495] detecting cgroup driver to use...
	I1028 12:15:33.666103  185942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:33.683188  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:33.699441  185942 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:33.699506  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:33.714192  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:33.728325  185942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:33.850801  185942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:34.028929  185942 docker.go:233] disabling docker service ...
	I1028 12:15:34.028991  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:34.045600  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:34.059450  185942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:34.182310  185942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:34.305346  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:34.322354  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:34.342738  185942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:15:34.342804  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.354622  185942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:34.354687  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.365663  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.376503  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.388360  185942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:34.399960  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.419087  185942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.439700  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.451425  185942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:34.461657  185942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:34.461710  185942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:34.476292  185942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:34.487186  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:34.614984  185942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:15:34.709983  185942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:34.710061  185942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:34.715204  185942 start.go:563] Will wait 60s for crictl version
	I1028 12:15:34.715268  185942 ssh_runner.go:195] Run: which crictl
	I1028 12:15:34.719459  185942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:34.760330  185942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
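
Before querying crictl, the log shows a bounded wait for /var/run/crio/crio.sock ("Will wait 60s for socket path"). A small sketch of that pattern is below; the helper name and 500ms polling interval are assumptions, not minikube's actual start.go code.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present; safe to run crictl version")
}
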
	I1028 12:15:34.760407  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.788635  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.820113  185942 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:15:34.821282  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:34.824384  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.824719  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:34.824746  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.825032  185942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:34.829502  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
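
The bash one-liner above rewrites /etc/hosts: it drops any existing host.minikube.internal line and appends the current mapping. An illustrative Go equivalent of that edit is sketched below; the function name is hypothetical and minikube performs this step with the shell pipeline shown in the log, not with this code.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line that already maps name and appends a
// fresh "ip<TAB>name" entry, mirroring the grep -v / echo pipeline above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}
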
	I1028 12:15:34.842695  185942 kubeadm.go:883] updating cluster {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:34.842845  185942 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:15:34.842897  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:34.881154  185942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:15:34.881218  185942 ssh_runner.go:195] Run: which lz4
	I1028 12:15:34.885630  185942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:34.890045  185942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:34.890075  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:15:33.375597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .Start
	I1028 12:15:33.375787  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring networks are active...
	I1028 12:15:33.376736  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network default is active
	I1028 12:15:33.377208  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network mk-old-k8s-version-089993 is active
	I1028 12:15:33.377706  186170 main.go:141] libmachine: (old-k8s-version-089993) Getting domain xml...
	I1028 12:15:33.378449  186170 main.go:141] libmachine: (old-k8s-version-089993) Creating domain...
	I1028 12:15:34.645925  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting to get IP...
	I1028 12:15:34.646739  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.647234  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.647347  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.647218  187153 retry.go:31] will retry after 292.558863ms: waiting for machine to come up
	I1028 12:15:34.941609  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.942074  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.942102  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.942024  187153 retry.go:31] will retry after 331.872118ms: waiting for machine to come up
	I1028 12:15:35.275748  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.276283  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.276318  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.276244  187153 retry.go:31] will retry after 427.829102ms: waiting for machine to come up
	I1028 12:15:35.705935  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.706409  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.706438  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.706367  187153 retry.go:31] will retry after 371.58196ms: waiting for machine to come up
	I1028 12:15:36.079879  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.080445  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.080469  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.080392  187153 retry.go:31] will retry after 504.323728ms: waiting for machine to come up
	I1028 12:15:36.585967  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.586405  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.586436  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.586346  187153 retry.go:31] will retry after 676.776678ms: waiting for machine to come up
	I1028 12:15:37.265499  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:37.266087  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:37.266114  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:37.266037  187153 retry.go:31] will retry after 1.178891662s: waiting for machine to come up
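
The old-k8s-version-089993 lines above come from a retry loop (retry.go:31) that waits for the machine's IP with growing, jittered delays. Below is a minimal sketch of that retry-with-backoff pattern under assumed parameters; the constants and function names are illustrative, not minikube's.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// sleeping a growing, jittered delay between tries.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jitter keeps concurrent waiters from polling in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(8, 300*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}
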
	I1028 12:15:36.448704  185942 crio.go:462] duration metric: took 1.563096609s to copy over tarball
	I1028 12:15:36.448792  185942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:38.703177  185942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25435315s)
	I1028 12:15:38.703207  185942 crio.go:469] duration metric: took 2.254465841s to extract the tarball
	I1028 12:15:38.703217  185942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:38.741005  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:38.788350  185942 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:15:38.788376  185942 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:15:38.788383  185942 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1028 12:15:38.788491  185942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:15:38.788558  185942 ssh_runner.go:195] Run: crio config
	I1028 12:15:38.835642  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:38.835667  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:38.835678  185942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:15:38.835701  185942 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709250 NodeName:embed-certs-709250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:15:38.835822  185942 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709250"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:15:38.835879  185942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:15:38.846832  185942 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:15:38.846925  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:15:38.857103  185942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1028 12:15:38.874531  185942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:15:38.892213  185942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1028 12:15:38.910949  185942 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1028 12:15:38.915391  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:38.928874  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:39.045969  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:15:39.063425  185942 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250 for IP: 192.168.39.211
	I1028 12:15:39.063449  185942 certs.go:194] generating shared ca certs ...
	I1028 12:15:39.063465  185942 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:15:39.063638  185942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:15:39.063693  185942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:15:39.063709  185942 certs.go:256] generating profile certs ...
	I1028 12:15:39.063810  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.key
	I1028 12:15:39.063893  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce
	I1028 12:15:39.063951  185942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key
	I1028 12:15:39.064107  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:15:39.064153  185942 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:15:39.064167  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:15:39.064202  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:15:39.064239  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:15:39.064272  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:15:39.064335  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:39.064972  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:15:39.103261  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:15:39.145102  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:15:39.175151  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:15:39.205220  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 12:15:39.236045  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:15:39.273622  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:15:39.299336  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:15:39.325277  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:15:39.349878  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:15:39.374466  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:15:39.398920  185942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:15:39.416280  185942 ssh_runner.go:195] Run: openssl version
	I1028 12:15:39.422478  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:15:39.434671  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439581  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439635  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.445736  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:15:39.457128  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:15:39.468602  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473229  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473306  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.479063  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:15:39.490370  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:15:39.501843  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506514  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506579  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.512633  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:15:39.524115  185942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:15:39.528804  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:15:39.534982  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:15:39.541214  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:15:39.547734  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:15:39.554143  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:15:39.560719  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
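
Each `openssl x509 -noout -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The sketch below shows the same check done with Go's crypto/x509, assuming one of the cert paths from the log; the helper name is an illustration, not minikube's certs.go.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window (the condition a failing `-checkend` run would flag).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
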
	I1028 12:15:39.567076  185942 kubeadm.go:392] StartCluster: {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:15:39.567173  185942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:15:39.567226  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.611567  185942 cri.go:89] found id: ""
	I1028 12:15:39.611644  185942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:15:39.622561  185942 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:15:39.622583  185942 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:15:39.622637  185942 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:15:39.632757  185942 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:15:39.633873  185942 kubeconfig.go:125] found "embed-certs-709250" server: "https://192.168.39.211:8443"
	I1028 12:15:39.635943  185942 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:15:39.646060  185942 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I1028 12:15:39.646104  185942 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:15:39.646119  185942 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:15:39.646177  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.686806  185942 cri.go:89] found id: ""
	I1028 12:15:39.686891  185942 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:15:39.703935  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:15:39.714319  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:15:39.714341  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:15:39.714389  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:15:39.725383  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:15:39.725452  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:15:39.737075  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:15:39.748226  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:15:39.748311  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:15:39.760111  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.770287  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:15:39.770365  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.780776  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:15:39.790412  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:15:39.790475  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:15:39.800727  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:15:39.811331  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:39.926791  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:38.446927  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:38.447488  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:38.447518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:38.447431  187153 retry.go:31] will retry after 1.170920623s: waiting for machine to come up
	I1028 12:15:39.619731  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:39.620169  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:39.620198  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:39.620119  187153 retry.go:31] will retry after 1.49376255s: waiting for machine to come up
	I1028 12:15:41.115247  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:41.115785  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:41.115815  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:41.115737  187153 retry.go:31] will retry after 2.161966931s: waiting for machine to come up
	I1028 12:15:43.280454  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:43.280989  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:43.281026  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:43.280932  187153 retry.go:31] will retry after 2.179284899s: waiting for machine to come up
	I1028 12:15:41.043020  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.11617977s)
	I1028 12:15:41.043082  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.246311  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.309073  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.392313  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:15:41.392425  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:41.893601  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.393518  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.444753  185942 api_server.go:72] duration metric: took 1.052438751s to wait for apiserver process to appear ...
	I1028 12:15:42.444794  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:15:42.444821  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.214786  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.214821  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.214837  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.252422  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.252458  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.445825  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.451454  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.451549  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:45.945668  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.956623  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.956667  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.445240  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.450197  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:46.450223  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.945901  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.950302  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:15:46.956218  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:15:46.956245  185942 api_server.go:131] duration metric: took 4.511443878s to wait for apiserver health ...
	I1028 12:15:46.956254  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:46.956260  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:46.958294  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:15:45.462983  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:45.463534  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:45.463560  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:45.463491  187153 retry.go:31] will retry after 2.2623086s: waiting for machine to come up
	I1028 12:15:47.728769  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:47.729277  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:47.729332  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:47.729241  187153 retry.go:31] will retry after 4.393695309s: waiting for machine to come up
	I1028 12:15:46.959738  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:15:46.970473  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:15:46.994129  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:15:47.003807  185942 system_pods.go:59] 8 kube-system pods found
	I1028 12:15:47.003843  185942 system_pods.go:61] "coredns-7c65d6cfc9-j66cd" [d53b2839-00f6-4ccc-833d-76424b3efdba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:15:47.003851  185942 system_pods.go:61] "etcd-embed-certs-709250" [24761127-dde4-4f5d-b7cf-a13e37366e0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:15:47.003858  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [17996153-32c3-41e0-be90-fc9e058e0080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:15:47.003864  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [4ce37c00-1015-45f8-b847-1ca92cdf3a31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:15:47.003871  185942 system_pods.go:61] "kube-proxy-dl7xq" [a06ed5ff-b1e9-42c7-ba26-f120bb03ccb6] Running
	I1028 12:15:47.003877  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [c76e654e-a7fc-4891-8e73-bd74f9178c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:15:47.003883  185942 system_pods.go:61] "metrics-server-6867b74b74-k69kz" [568d5308-3f66-459b-b5c8-594d9400b6c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:15:47.003886  185942 system_pods.go:61] "storage-provisioner" [6552cef1-21b6-4306-a3e2-ff16793257dc] Running
	I1028 12:15:47.003893  185942 system_pods.go:74] duration metric: took 9.734271ms to wait for pod list to return data ...
	I1028 12:15:47.003900  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:15:47.008428  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:15:47.008465  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:15:47.008479  185942 node_conditions.go:105] duration metric: took 4.573275ms to run NodePressure ...
	I1028 12:15:47.008504  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:47.285509  185942 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291045  185942 kubeadm.go:739] kubelet initialised
	I1028 12:15:47.291069  185942 kubeadm.go:740] duration metric: took 5.521713ms waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291078  185942 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:15:47.299072  185942 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:49.312365  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:50.804953  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:50.804976  185942 pod_ready.go:82] duration metric: took 3.505873868s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:50.804986  185942 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:52.126559  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126960  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has current primary IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126988  186170 main.go:141] libmachine: (old-k8s-version-089993) Found IP for machine: 192.168.61.119
	I1028 12:15:52.127021  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserving static IP address...
	I1028 12:15:52.127441  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.127474  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | skip adding static IP to network mk-old-k8s-version-089993 - found existing host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"}
	I1028 12:15:52.127486  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserved static IP address: 192.168.61.119
	I1028 12:15:52.127498  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting for SSH to be available...
	I1028 12:15:52.127551  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Getting to WaitForSSH function...
	I1028 12:15:52.129970  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130313  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.130349  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH client type: external
	I1028 12:15:52.130540  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa (-rw-------)
	I1028 12:15:52.130565  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:52.130578  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | About to run SSH command:
	I1028 12:15:52.130593  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | exit 0
	I1028 12:15:52.253686  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:52.254051  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:15:52.254719  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.257217  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257692  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.257719  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257996  186170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:15:52.258203  186170 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:52.258222  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:52.258456  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.260665  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.260972  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.261012  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.261139  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.261295  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261451  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261632  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.261786  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.261968  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.261979  186170 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:52.362092  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:52.362129  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362362  186170 buildroot.go:166] provisioning hostname "old-k8s-version-089993"
	I1028 12:15:52.362386  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362588  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.365124  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.365489  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365598  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.365768  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.365924  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.366060  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.366238  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.366424  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.366441  186170 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089993 && echo "old-k8s-version-089993" | sudo tee /etc/hostname
	I1028 12:15:52.485032  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089993
	
	I1028 12:15:52.485069  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.487733  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488095  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.488129  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488270  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.488458  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488724  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.488872  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.489063  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.489079  186170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089993/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:52.599940  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:52.599975  186170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:52.600009  186170 buildroot.go:174] setting up certificates
	I1028 12:15:52.600019  186170 provision.go:84] configureAuth start
	I1028 12:15:52.600028  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.600319  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.603047  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603357  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.603385  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603536  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.605827  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606164  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.606190  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606334  186170 provision.go:143] copyHostCerts
	I1028 12:15:52.606414  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:52.606429  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:52.606500  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:52.606650  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:52.606661  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:52.606693  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:52.606795  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:52.606805  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:52.606834  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:52.606904  186170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089993 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-089993]
	I1028 12:15:52.715475  186170 provision.go:177] copyRemoteCerts
	I1028 12:15:52.715531  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:52.715556  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.718456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718758  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.718801  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718993  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.719189  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.719339  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.719461  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:52.802994  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:52.832311  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:15:52.864304  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:52.892143  186170 provision.go:87] duration metric: took 292.108499ms to configureAuth
	I1028 12:15:52.892178  186170 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:52.892401  186170 config.go:182] Loaded profile config "old-k8s-version-089993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:15:52.892499  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.895607  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.895996  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.896031  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.896198  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.896442  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896615  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896796  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.897005  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.897225  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.897249  186170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:53.144636  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:53.144668  186170 machine.go:96] duration metric: took 886.451205ms to provisionDockerMachine
	I1028 12:15:53.144683  186170 start.go:293] postStartSetup for "old-k8s-version-089993" (driver="kvm2")
	I1028 12:15:53.144701  186170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:53.144739  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.145102  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:53.145135  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.147486  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147776  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.147805  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147926  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.148136  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.148297  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.148438  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.228968  186170 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:53.233756  186170 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:53.233783  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:53.233862  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:53.233981  186170 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:53.234114  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:53.244314  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:53.273027  186170 start.go:296] duration metric: took 128.321696ms for postStartSetup
	I1028 12:15:53.273067  186170 fix.go:56] duration metric: took 19.922145767s for fixHost
	I1028 12:15:53.273087  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.275762  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276036  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.276069  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276227  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.276431  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276610  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276759  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.276948  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:53.277130  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:53.277140  186170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:53.378422  186547 start.go:364] duration metric: took 2m50.816229865s to acquireMachinesLock for "default-k8s-diff-port-349222"
	I1028 12:15:53.378482  186547 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:53.378491  186547 fix.go:54] fixHost starting: 
	I1028 12:15:53.378917  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:53.378971  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:53.395967  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I1028 12:15:53.396434  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:53.396923  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:15:53.396950  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:53.397332  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:53.397552  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:15:53.397726  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:15:53.399287  186547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349222: state=Stopped err=<nil>
	I1028 12:15:53.399337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	W1028 12:15:53.399468  186547 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:53.401446  186547 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-349222" ...
	I1028 12:15:53.378277  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117753.349360033
	
	I1028 12:15:53.378307  186170 fix.go:216] guest clock: 1730117753.349360033
	I1028 12:15:53.378327  186170 fix.go:229] Guest: 2024-10-28 12:15:53.349360033 +0000 UTC Remote: 2024-10-28 12:15:53.273071551 +0000 UTC m=+234.997009775 (delta=76.288482ms)
	I1028 12:15:53.378346  186170 fix.go:200] guest clock delta is within tolerance: 76.288482ms
	I1028 12:15:53.378351  186170 start.go:83] releasing machines lock for "old-k8s-version-089993", held for 20.027466326s
	I1028 12:15:53.378379  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.378640  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:53.381602  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.381951  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.381980  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.382165  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382654  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382864  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382949  186170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:53.382997  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.383090  186170 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:53.383109  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.385829  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.385926  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386223  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386272  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386303  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386343  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386522  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386692  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.386704  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386849  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387012  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.387009  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.387217  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387355  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.462736  186170 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:53.490076  186170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:53.637493  186170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:53.643609  186170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:53.643668  186170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:53.660695  186170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:53.660725  186170 start.go:495] detecting cgroup driver to use...
	I1028 12:15:53.660797  186170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:53.677283  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:53.691838  186170 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:53.691914  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:53.706354  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:53.721257  186170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:53.843177  186170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:54.012260  186170 docker.go:233] disabling docker service ...
	I1028 12:15:54.012369  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:54.028355  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:54.042371  186170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:54.175559  186170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:54.308690  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:54.323918  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:54.343000  186170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:15:54.343072  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.354540  186170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:54.354620  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.366058  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.377720  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.388649  186170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:54.401499  186170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:54.414177  186170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:54.414250  186170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:54.429049  186170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:54.441955  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:54.588173  186170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:15:54.686671  186170 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:54.686732  186170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:54.692246  186170 start.go:563] Will wait 60s for crictl version
	I1028 12:15:54.692303  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:15:54.697056  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:54.749343  186170 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:54.749410  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.783554  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.817295  186170 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:15:52.838774  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.811974  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:53.811997  185942 pod_ready.go:82] duration metric: took 3.00700476s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:53.812008  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:55.824400  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.402920  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Start
	I1028 12:15:53.403172  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring networks are active...
	I1028 12:15:53.403912  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network default is active
	I1028 12:15:53.404195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network mk-default-k8s-diff-port-349222 is active
	I1028 12:15:53.404615  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Getting domain xml...
	I1028 12:15:53.405554  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Creating domain...
	I1028 12:15:54.734540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting to get IP...
	I1028 12:15:54.735417  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735784  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735880  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:54.735759  187305 retry.go:31] will retry after 268.036011ms: waiting for machine to come up
	I1028 12:15:55.005376  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.005999  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.006032  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.005930  187305 retry.go:31] will retry after 255.477665ms: waiting for machine to come up
	I1028 12:15:55.263500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264118  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264153  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.264087  187305 retry.go:31] will retry after 354.942061ms: waiting for machine to come up
	I1028 12:15:55.620877  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621664  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621698  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.621610  187305 retry.go:31] will retry after 569.620755ms: waiting for machine to come up
	I1028 12:15:56.192393  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192872  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.192803  187305 retry.go:31] will retry after 703.637263ms: waiting for machine to come up
	I1028 12:15:56.897762  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898304  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.898214  187305 retry.go:31] will retry after 713.628482ms: waiting for machine to come up
	I1028 12:15:54.818674  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:54.822118  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822477  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:54.822508  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822713  186170 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:54.827066  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:54.839718  186170 kubeadm.go:883] updating cluster {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:54.839871  186170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:15:54.839932  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:54.896582  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:54.896647  186170 ssh_runner.go:195] Run: which lz4
	I1028 12:15:54.901264  186170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:54.905758  186170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:54.905798  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:15:56.763719  186170 crio.go:462] duration metric: took 1.862485619s to copy over tarball
	I1028 12:15:56.763807  186170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:58.321600  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:00.018244  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.018285  185942 pod_ready.go:82] duration metric: took 6.206271068s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.018297  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028610  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.028638  185942 pod_ready.go:82] duration metric: took 10.334289ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028653  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041057  185942 pod_ready.go:93] pod "kube-proxy-dl7xq" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.041091  185942 pod_ready.go:82] duration metric: took 12.429027ms for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041106  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049617  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.049645  185942 pod_ready.go:82] duration metric: took 8.529436ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049659  185942 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:57.613338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613844  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613873  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:57.613796  187305 retry.go:31] will retry after 1.188479203s: waiting for machine to come up
	I1028 12:15:58.803300  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803690  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803724  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:58.803650  187305 retry.go:31] will retry after 1.439057212s: waiting for machine to come up
	I1028 12:16:00.244665  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245201  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245239  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:00.245141  187305 retry.go:31] will retry after 1.842038011s: waiting for machine to come up
	I1028 12:16:02.090283  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090879  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:02.090828  187305 retry.go:31] will retry after 1.556155538s: waiting for machine to come up
	I1028 12:15:59.824110  186170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060253776s)
	I1028 12:15:59.824148  186170 crio.go:469] duration metric: took 3.060398276s to extract the tarball
	I1028 12:15:59.824158  186170 ssh_runner.go:146] rm: /preloaded.tar.lz4
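
The extraction it just timed is plain tar with an lz4 filter; a sketch that reproduces the same command and the duration metric (assumes tar, lz4, and sudo access on the guest):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same invocation the log records: keep security xattrs, decompress with lz4, unpack under /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
}
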
	I1028 12:15:59.871783  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:59.913216  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:59.913249  186170 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:15:59.913338  186170 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.913374  186170 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.913404  186170 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.913415  186170 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.913435  186170 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.913459  186170 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.913378  186170 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:15:59.913372  186170 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:15:59.914923  186170 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.914935  186170 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.914944  186170 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.914924  186170 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:15:59.915002  186170 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.915023  186170 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.107392  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.125355  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.128498  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.134762  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.138350  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.141722  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:16:00.186291  186170 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:16:00.186340  186170 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.186404  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253168  186170 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:16:00.253211  186170 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.253256  186170 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:16:00.253279  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253288  186170 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.253328  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290772  186170 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:16:00.290817  186170 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.290857  186170 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:16:00.290890  186170 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:16:00.290869  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290913  186170 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:16:00.290946  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290970  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.290896  186170 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.291016  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.291049  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.291080  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.317629  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.377316  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.377376  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.377463  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.377515  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.488216  186170 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:16:00.488279  186170 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.488337  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.513051  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.556242  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.556277  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.556380  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.556435  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.556544  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.556560  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.634253  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.737688  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.737739  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.737799  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:16:00.737870  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:16:00.737897  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:16:00.738000  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.832218  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:16:00.832247  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:16:00.832284  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:16:00.844460  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.880788  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:16:01.121687  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:01.269970  186170 cache_images.go:92] duration metric: took 1.356701981s to LoadCachedImages
	W1028 12:16:01.270091  186170 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 12:16:01.270114  186170 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1028 12:16:01.270229  186170 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089993 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:01.270317  186170 ssh_runner.go:195] Run: crio config
	I1028 12:16:01.330579  186170 cni.go:84] Creating CNI manager for ""
	I1028 12:16:01.330604  186170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:01.330615  186170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:01.330634  186170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089993 NodeName:old-k8s-version-089993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:16:01.330861  186170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089993"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:01.330940  186170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:16:01.342449  186170 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:01.342542  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:01.354804  186170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:16:01.373823  186170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:01.393848  186170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:16:01.414537  186170 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:01.419057  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
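
The bash one-liner above is an idempotent hosts update: drop any line already ending in the control-plane name, then append the fresh IP mapping. A rough Go equivalent of that edit, writing to a hypothetical scratch file rather than the guest's real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line maps
// the given hostname, mirroring the grep -v + echo one-liner in the log.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Hypothetical scratch file; the real target is /etc/hosts on the guest.
	_ = upsertHost("hosts.test", "192.168.61.119", "control-plane.minikube.internal")
}
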
	I1028 12:16:01.434491  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:01.605220  186170 ssh_runner.go:195] Run: sudo systemctl start kubelet
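
Those last few steps (write the kubelet drop-in and unit, daemon-reload, start kubelet) are ordinary systemd plumbing; a sketch of the same sequence, with a trimmed-down hypothetical drop-in body standing in for the full flag set logged earlier:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Hypothetical drop-in mirroring the [Service] override shown earlier in the log.
	dropIn := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --config=/var/lib/kubelet/config.yaml
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	// Pick up the new unit file and start the kubelet, as the two commands above do.
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		panic(err)
	}
	if err := exec.Command("systemctl", "start", "kubelet").Run(); err != nil {
		panic(err)
	}
}
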
	I1028 12:16:01.629171  186170 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993 for IP: 192.168.61.119
	I1028 12:16:01.629198  186170 certs.go:194] generating shared ca certs ...
	I1028 12:16:01.629223  186170 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:01.629411  186170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:01.629473  186170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:01.629486  186170 certs.go:256] generating profile certs ...
	I1028 12:16:01.629625  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key
	I1028 12:16:01.629692  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee
	I1028 12:16:01.629740  186170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key
	I1028 12:16:01.629886  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:01.629929  186170 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:01.629943  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:01.629984  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:01.630025  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:01.630060  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:01.630113  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:01.630911  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:01.673352  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:01.705371  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:01.731174  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:01.775555  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:16:01.809878  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:01.842241  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:01.876753  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:16:01.914897  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:01.945991  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:01.977763  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:02.010010  186170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:02.034184  186170 ssh_runner.go:195] Run: openssl version
	I1028 12:16:02.042784  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:02.055148  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060669  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060751  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.067345  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:02.079427  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:02.091613  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.096996  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.097061  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.103561  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:02.115762  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:02.128405  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133889  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133961  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.140274  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
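
The openssl/ln sequence above installs each CA under the hash-named path OpenSSL uses for lookup (e.g. b5213941.0 pointing at minikubeCA.pem). A sketch of one iteration of that loop (assumes openssl on PATH and permission to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // location taken from the log
	// openssl prints the subject hash that /etc/ssl/certs symlinks are named after.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		_ = os.Symlink(pem, link) // e.g. /etc/ssl/certs/b5213941.0 -> minikubeCA.pem
	}
	fmt.Println("trust link:", link)
}
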
	I1028 12:16:02.155800  186170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:02.162343  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:02.170755  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:02.179332  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:02.187694  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:02.196183  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:02.204538  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
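
openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 24 hours, which is how this restart path decides whether certs need regenerating. The same check in pure Go (cert path taken from the log; treat this as an illustrative example, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same idea as `openssl x509 -checkend 86400`: fail if the cert expires within a day.
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h, regeneration needed")
		os.Exit(1)
	}
	fmt.Println("certificate valid past the check window")
}
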
	I1028 12:16:02.212604  186170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:02.212715  186170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:02.212796  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.260250  186170 cri.go:89] found id: ""
	I1028 12:16:02.260350  186170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:02.274246  186170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:02.274269  186170 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:02.274335  186170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:02.287972  186170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:02.288983  186170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:16:02.289661  186170 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-132631/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089993" cluster setting kubeconfig missing "old-k8s-version-089993" context setting]
	I1028 12:16:02.290778  186170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:02.292747  186170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:02.306303  186170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1028 12:16:02.306357  186170 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:02.306375  186170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:02.306438  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.348962  186170 cri.go:89] found id: ""
	I1028 12:16:02.349041  186170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:02.366483  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:02.377667  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:02.377690  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:02.377758  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:02.387857  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:02.387915  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:02.398137  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:02.408922  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:02.408992  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:02.419044  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.428952  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:02.429020  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.439488  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:02.450112  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:02.450174  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
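
The grep-then-rm loop above treats a kubeconfig as stale when it does not mention the expected control-plane endpoint (here every file is simply missing, so all four get removed before kubeadm regenerates them). A compact sketch of that decision, under the assumption that a missing file also counts as stale:

package main

import (
	"fmt"
	"os"
	"strings"
)

// staleKubeconfig reports whether a kubeconfig should be regenerated because it
// does not point at the expected control-plane endpoint (mirrors the grep+rm loop above).
func staleKubeconfig(path, endpoint string) bool {
	data, err := os.ReadFile(path)
	if err != nil {
		return true // missing file counts as stale, as in the log
	}
	return !strings.Contains(string(data), endpoint)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if staleKubeconfig(path, endpoint) {
			fmt.Println("removing stale", path)
			_ = os.Remove(path)
		}
	}
}
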
	I1028 12:16:02.461051  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:02.472059  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.607734  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.165378  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:04.555857  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:03.648337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648760  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:03.648736  187305 retry.go:31] will retry after 2.586516153s: waiting for machine to come up
	I1028 12:16:06.236934  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237402  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237433  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:06.237352  187305 retry.go:31] will retry after 3.507901898s: waiting for machine to come up
	I1028 12:16:03.452795  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.710145  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.811788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.903114  186170 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:03.903247  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.403775  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.904258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.403398  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.903353  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.403907  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.903762  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.403316  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.904259  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.557581  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.056276  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.746980  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747449  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747482  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:09.747401  187305 retry.go:31] will retry after 4.499585546s: waiting for machine to come up
	I1028 12:16:08.403804  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:08.903726  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.404155  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.903968  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.403990  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.903742  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.403836  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.904088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.403293  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.903635  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
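
The burst of identical pgrep lines is a fixed-interval poll for the apiserver process after the kubeadm init phases bring up the static pods; a sketch of the same poll with an explicit deadline (the roughly 0.5s spacing is read off the timestamps above, the 5-minute budget is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		// Same check the log repeats: is a kube-apiserver process for this cluster running?
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
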
	I1028 12:16:15.487114  185546 start.go:364] duration metric: took 56.6590668s to acquireMachinesLock for "no-preload-871884"
	I1028 12:16:15.487176  185546 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:16:15.487190  185546 fix.go:54] fixHost starting: 
	I1028 12:16:15.487650  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:16:15.487713  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:16:15.508857  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I1028 12:16:15.509318  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:16:15.510000  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:16:15.510037  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:16:15.510385  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:16:15.510599  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:15.510779  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:16:15.512738  185546 fix.go:112] recreateIfNeeded on no-preload-871884: state=Stopped err=<nil>
	I1028 12:16:15.512772  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	W1028 12:16:15.512963  185546 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:16:15.514890  185546 out.go:177] * Restarting existing kvm2 VM for "no-preload-871884" ...
	I1028 12:16:11.056427  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:13.058549  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.556621  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.516551  185546 main.go:141] libmachine: (no-preload-871884) Calling .Start
	I1028 12:16:15.516786  185546 main.go:141] libmachine: (no-preload-871884) Ensuring networks are active...
	I1028 12:16:15.517934  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network default is active
	I1028 12:16:15.518543  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network mk-no-preload-871884 is active
	I1028 12:16:15.519028  185546 main.go:141] libmachine: (no-preload-871884) Getting domain xml...
	I1028 12:16:15.519878  185546 main.go:141] libmachine: (no-preload-871884) Creating domain...
	I1028 12:16:14.249128  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249645  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has current primary IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249674  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Found IP for machine: 192.168.50.75
	I1028 12:16:14.249689  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserving static IP address...
	I1028 12:16:14.250120  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserved static IP address: 192.168.50.75
	I1028 12:16:14.250139  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for SSH to be available...
	I1028 12:16:14.250164  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.250205  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | skip adding static IP to network mk-default-k8s-diff-port-349222 - found existing host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"}
	I1028 12:16:14.250222  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Getting to WaitForSSH function...
	I1028 12:16:14.252540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.252883  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.252926  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.253035  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH client type: external
	I1028 12:16:14.253075  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa (-rw-------)
	I1028 12:16:14.253100  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:14.253115  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | About to run SSH command:
	I1028 12:16:14.253129  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | exit 0
	I1028 12:16:14.373688  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | SSH cmd err, output: <nil>: 
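
"exit 0" over SSH is the driver's readiness probe: the machine counts as up once any remote command can complete. A sketch of that poll (the key path and retry budget are hypothetical; the real driver builds the full ssh argv shown a few lines above):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running a no-op command ("exit 0") over ssh until it succeeds,
// which is the same readiness probe the driver logs above.
func waitForSSH(addr, key string) error {
	for i := 0; i < 60; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=10",
			"-i", key, "docker@"+addr, "exit", "0")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("ssh on %s never became available", addr)
}

func main() {
	// Hypothetical key path; the log uses the machine's id_rsa under .minikube/machines.
	if err := waitForSSH("192.168.50.75", "id_rsa"); err != nil {
		fmt.Println(err)
	}
}
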
	I1028 12:16:14.374101  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetConfigRaw
	I1028 12:16:14.374713  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.377338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.377824  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.377857  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.378094  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:16:14.378326  186547 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:14.378345  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:14.378556  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.380694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.380976  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.380992  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.381143  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.381356  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381521  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381678  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.381882  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.382107  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.382119  186547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:14.490030  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:14.490061  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490303  186547 buildroot.go:166] provisioning hostname "default-k8s-diff-port-349222"
	I1028 12:16:14.490331  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490523  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.492989  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493395  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.493426  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493626  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.493794  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.493960  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.494104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.494258  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.494427  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.494439  186547 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-349222 && echo "default-k8s-diff-port-349222" | sudo tee /etc/hostname
	I1028 12:16:14.604373  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-349222
	
	I1028 12:16:14.604405  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.607135  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607437  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.607465  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.607891  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608060  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608187  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.608353  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.608549  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.608569  186547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-349222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-349222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-349222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:14.714933  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:14.714963  186547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:14.714990  186547 buildroot.go:174] setting up certificates
	I1028 12:16:14.714998  186547 provision.go:84] configureAuth start
	I1028 12:16:14.715007  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.715321  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.718051  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.718406  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718504  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.720638  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.720945  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.720972  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.721127  186547 provision.go:143] copyHostCerts
	I1028 12:16:14.721198  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:14.721213  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:14.721283  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:14.721407  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:14.721417  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:14.721446  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:14.721522  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:14.721544  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:14.721571  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:14.721634  186547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-349222 san=[127.0.0.1 192.168.50.75 default-k8s-diff-port-349222 localhost minikube]
	I1028 12:16:14.854227  186547 provision.go:177] copyRemoteCerts
	I1028 12:16:14.854285  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:14.854314  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.857250  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857590  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.857620  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857897  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.858091  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.858286  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.858434  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:14.940752  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:14.967575  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 12:16:14.992693  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:16:15.017801  186547 provision.go:87] duration metric: took 302.790563ms to configureAuth
	I1028 12:16:15.017831  186547 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:15.018073  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:15.018168  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.021181  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.021574  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021719  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.021894  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022113  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022317  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.022564  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.022744  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.022761  186547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:15.257308  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:15.257339  186547 machine.go:96] duration metric: took 878.998573ms to provisionDockerMachine
	I1028 12:16:15.257350  186547 start.go:293] postStartSetup for "default-k8s-diff-port-349222" (driver="kvm2")
	I1028 12:16:15.257360  186547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:15.257378  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.257695  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:15.257721  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.260380  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260767  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.260795  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260990  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.261186  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.261370  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.261513  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.341376  186547 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:15.345736  186547 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:15.345760  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:15.345820  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:15.345891  186547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:15.345978  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:15.355662  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:15.381750  186547 start.go:296] duration metric: took 124.385481ms for postStartSetup
	I1028 12:16:15.381788  186547 fix.go:56] duration metric: took 22.00329785s for fixHost
	I1028 12:16:15.381807  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.384756  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385099  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.385130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385359  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.385587  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385782  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385974  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.386165  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.386345  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.386355  186547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:15.486905  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117775.445749296
	
	I1028 12:16:15.486934  186547 fix.go:216] guest clock: 1730117775.445749296
	I1028 12:16:15.486944  186547 fix.go:229] Guest: 2024-10-28 12:16:15.445749296 +0000 UTC Remote: 2024-10-28 12:16:15.381791731 +0000 UTC m=+192.967058764 (delta=63.957565ms)
	I1028 12:16:15.487005  186547 fix.go:200] guest clock delta is within tolerance: 63.957565ms
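The fix.go lines above compare the guest's "date +%s.%N" output against the host-side timestamp and accept the ~64ms difference as within tolerance. A small sketch of that comparison, using the values from the log (the 2s tolerance below is an assumed illustration, not necessarily minikube's threshold):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute guest/host clock difference and whether it
// is within the allowed skew.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	// Values from the log: guest "date +%s.%N" and the host-side reading.
	guest := time.Unix(1730117775, 445749296)
	host := time.Date(2024, 10, 28, 12, 16, 15, 381791731, time.UTC)
	d, ok := clockDelta(guest, host, 2*time.Second) // 2s tolerance is an assumption
	fmt.Printf("delta=%v withinTolerance=%v\n", d, ok) // delta=63.957565ms withinTolerance=true
}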
	I1028 12:16:15.487018  186547 start.go:83] releasing machines lock for "default-k8s-diff-port-349222", held for 22.108560462s
	I1028 12:16:15.487082  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.487382  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:15.490840  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491343  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.491374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491528  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492208  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492431  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492581  186547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:15.492657  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.492706  186547 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:15.492746  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.496062  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496119  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496544  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496901  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497225  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497257  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497458  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497583  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497665  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.497798  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497977  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.590741  186547 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:15.615347  186547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:15.762979  186547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:15.770132  186547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:15.770221  186547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:15.788651  186547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:15.788684  186547 start.go:495] detecting cgroup driver to use...
	I1028 12:16:15.788751  186547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:15.806118  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:15.820916  186547 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:15.820986  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:15.835770  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:15.850994  186547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:15.979465  186547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:16.160837  186547 docker.go:233] disabling docker service ...
	I1028 12:16:16.160924  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:16.177934  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:16.194616  186547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:16.320605  186547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:16.464175  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:16.479626  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:16.502747  186547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:16.502889  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.514636  186547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:16.514695  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.528137  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.539961  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.552263  186547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:16.566275  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.578632  186547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.599084  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
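The sed invocations above all follow one pattern: force a single key = "value" line in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, and so on). A tiny sketch of composing that command; the helper is hypothetical, the values are the ones from the log:

package main

import "fmt"

// setCrioConf renders the sed command used above to pin one key in
// /etc/crio/crio.conf.d/02-crio.conf to a quoted value.
func setCrioConf(key, value string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*%[1]s = .*$|%[1]s = "%[2]s"|' /etc/crio/crio.conf.d/02-crio.conf`, key, value)
}

func main() {
	fmt.Println(setCrioConf("pause_image", "registry.k8s.io/pause:3.10"))
	fmt.Println(setCrioConf("cgroup_manager", "cgroupfs"))
}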
	I1028 12:16:16.611250  186547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:16.621976  186547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:16.622052  186547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:16.640800  186547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
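The sequence above is a probe-then-fallback: the sysctl check fails with status 255 because br_netfilter is not loaded yet, so the runner loads the module and then enables IPv4 forwarding. A hedged sketch of the same order of operations using os/exec (hypothetical helper, same commands as the log):

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetfilter probes the bridge-nf sysctl, loads br_netfilter if the
// probe fails (loading the module creates the sysctl), then turns on IPv4
// forwarding. Needs sudo to do anything for real.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}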
	I1028 12:16:16.651767  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:16.806628  186547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:16.903584  186547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:16.903655  186547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:16.909873  186547 start.go:563] Will wait 60s for crictl version
	I1028 12:16:16.909950  186547 ssh_runner.go:195] Run: which crictl
	I1028 12:16:16.915388  186547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:16.964424  186547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:16.964517  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:16.997415  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:17.032323  186547 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:17.033747  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:17.036500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.036903  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:17.036935  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.037118  186547 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:17.041698  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:17.056649  186547 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:17.056792  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:17.056840  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:17.099143  186547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:17.099233  186547 ssh_runner.go:195] Run: which lz4
	I1028 12:16:17.103882  186547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:16:17.108660  186547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:16:17.108699  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
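The preload step above is a check-then-copy: stat /preloaded.tar.lz4 on the guest, and only scp the cached tarball when the stat fails. A hedged sketch of that flow behind a small runner abstraction (all names here are hypothetical, not minikube's ssh_runner API):

package main

import "fmt"

// Runner abstracts the two guest-side operations this step needs: run a
// command and copy a file to the guest.
type Runner interface {
	Run(cmd string) error
	Copy(local, remote string) error
}

// ensurePreload copies the cached preload tarball only when the guest does
// not already have it: stat first, scp on failure.
func ensurePreload(r Runner, localTarball string) error {
	if err := r.Run(`stat -c "%s %y" /preloaded.tar.lz4`); err == nil {
		return nil // already on the guest, nothing to copy
	}
	fmt.Println("preload missing on guest, copying", localTarball)
	return r.Copy(localTarball, "/preloaded.tar.lz4")
}

// fakeRunner pretends the tarball is missing so the copy path runs.
type fakeRunner struct{}

func (fakeRunner) Run(cmd string) error {
	return fmt.Errorf("exit status 1: %s", cmd)
}

func (fakeRunner) Copy(local, remote string) error {
	fmt.Println("scp", local, "->", remote)
	return nil
}

func main() {
	_ = ensurePreload(fakeRunner{}, "preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4")
}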
	I1028 12:16:13.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:13.903443  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.404017  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.903385  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.403903  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.904106  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.403713  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.903397  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.404299  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.903855  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.559178  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:19.560739  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:16.842086  185546 main.go:141] libmachine: (no-preload-871884) Waiting to get IP...
	I1028 12:16:16.843056  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:16.843514  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:16.843599  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:16.843484  187500 retry.go:31] will retry after 240.188984ms: waiting for machine to come up
	I1028 12:16:17.085193  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.085702  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.085739  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.085649  187500 retry.go:31] will retry after 361.44193ms: waiting for machine to come up
	I1028 12:16:17.448961  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.449619  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.449645  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.449576  187500 retry.go:31] will retry after 386.179326ms: waiting for machine to come up
	I1028 12:16:17.837097  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.837879  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.837907  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.837834  187500 retry.go:31] will retry after 531.12665ms: waiting for machine to come up
	I1028 12:16:18.370266  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:18.370803  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:18.370834  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:18.370746  187500 retry.go:31] will retry after 760.20134ms: waiting for machine to come up
	I1028 12:16:19.132853  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.133415  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.133444  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.133360  187500 retry.go:31] will retry after 817.773678ms: waiting for machine to come up
	I1028 12:16:19.952317  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.952800  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.952824  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.952760  187500 retry.go:31] will retry after 861.798605ms: waiting for machine to come up
	I1028 12:16:20.816156  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:20.816794  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:20.816826  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:20.816750  187500 retry.go:31] will retry after 908.062214ms: waiting for machine to come up
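The no-preload-871884 block above is a poll loop: look up the domain's DHCP lease, and when no IP exists yet, sleep a growing, slightly jittered interval and retry. A minimal sketch of that pattern (the lookup, delays, and address below are illustrative only, not retry.go's implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a little
// longer (with jitter) after each failure, like the "will retry after ..."
// lines above.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 2 // grow the base delay between attempts
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.0.2.10", nil // placeholder address, not from the log
	}, 30*time.Second)
	fmt.Println(ip, err)
}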
	I1028 12:16:18.686980  186547 crio.go:462] duration metric: took 1.583134893s to copy over tarball
	I1028 12:16:18.687053  186547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:16:21.016264  186547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.329174428s)
	I1028 12:16:21.016309  186547 crio.go:469] duration metric: took 2.329300291s to extract the tarball
	I1028 12:16:21.016322  186547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:16:21.053950  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:21.112876  186547 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:16:21.112903  186547 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:16:21.112914  186547 kubeadm.go:934] updating node { 192.168.50.75 8444 v1.31.2 crio true true} ...
	I1028 12:16:21.113037  186547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-349222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
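The kubelet drop-in above uses the usual systemd idiom of an empty ExecStart= to clear the packaged command before setting the node-specific one. A sketch of rendering that drop-in as a template (the templating helper is hypothetical; the flag values are the ones from the log):

package main

import "fmt"

// kubeletUnit renders the systemd drop-in shown above: want crio, clear
// ExecStart, then start kubelet with node-specific overrides.
func kubeletUnit(version, hostname, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, hostname, nodeIP)
}

func main() {
	fmt.Print(kubeletUnit("v1.31.2", "default-k8s-diff-port-349222", "192.168.50.75"))
}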
	I1028 12:16:21.113119  186547 ssh_runner.go:195] Run: crio config
	I1028 12:16:21.179853  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:21.179877  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:21.179888  186547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:21.179907  186547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.75 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-349222 NodeName:default-k8s-diff-port-349222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:21.180039  186547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.75
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-349222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.75"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.75"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:21.180117  186547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:21.191650  186547 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:21.191721  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:21.201670  186547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1028 12:16:21.220426  186547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:21.240774  186547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1028 12:16:21.263336  186547 ssh_runner.go:195] Run: grep 192.168.50.75	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:21.267818  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:21.281577  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:21.441517  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:21.464117  186547 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222 for IP: 192.168.50.75
	I1028 12:16:21.464145  186547 certs.go:194] generating shared ca certs ...
	I1028 12:16:21.464167  186547 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:21.464392  186547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:21.464458  186547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:21.464485  186547 certs.go:256] generating profile certs ...
	I1028 12:16:21.464599  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/client.key
	I1028 12:16:21.464691  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key.e54e33e0
	I1028 12:16:21.464749  186547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key
	I1028 12:16:21.464919  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:21.464967  186547 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:21.464981  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:21.465006  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:21.465031  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:21.465069  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:21.465124  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:21.465976  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:21.511145  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:21.572071  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:21.613442  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:21.655508  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 12:16:21.687378  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:16:21.713227  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:21.738909  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:21.765274  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:21.792427  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:21.817632  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:21.842996  186547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:21.861059  186547 ssh_runner.go:195] Run: openssl version
	I1028 12:16:21.867814  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:21.880769  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886245  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886325  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.893179  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:21.908974  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:21.926992  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932350  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932428  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.939073  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:21.952302  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:21.965485  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971486  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971564  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.978531  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
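Each cert above goes through the same pair of steps: link the PEM into /usr/share/ca-certificates, then create the "<subject-hash>.0" symlink OpenSSL expects under /etc/ssl/certs, guarded by "test -L || ln -fs" so a rerun is a no-op. A sketch of just the hash-symlink half (hypothetical helper, same commands as the log; needs root to do anything):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLink creates the "<subject-hash>.0" symlink under /etc/ssl/certs for a
// CA PEM that is already installed under /usr/share/ca-certificates.
func hashLink(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
	fmt.Println(hashLink("/usr/share/ca-certificates/minikubeCA.pem"))
}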
	I1028 12:16:21.995399  186547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:22.001453  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:22.009449  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:22.016898  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:22.024410  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:22.033151  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:22.040981  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
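The block of openssl runs above asks one question per control-plane certificate: does it expire within 86400 seconds (24h)? That is exactly what "-checkend 86400" does. An alternative sketch that answers the same question in-process with crypto/x509 rather than shelling out (illustrative, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same check "openssl x509 -checkend" performs.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, err)
	}
}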
	I1028 12:16:22.048298  186547 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:22.048441  186547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:22.048531  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.095210  186547 cri.go:89] found id: ""
	I1028 12:16:22.095319  186547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:22.111740  186547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:22.111772  186547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:22.111828  186547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:22.122472  186547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:22.123648  186547 kubeconfig.go:125] found "default-k8s-diff-port-349222" server: "https://192.168.50.75:8444"
	I1028 12:16:22.126117  186547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:22.137057  186547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.75
	I1028 12:16:22.137096  186547 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:22.137108  186547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:22.137179  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.180526  186547 cri.go:89] found id: ""
	I1028 12:16:22.180638  186547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:22.197697  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:22.208176  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:22.208197  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:22.208246  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:16:22.218379  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:22.218438  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:22.228844  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:16:22.239330  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:22.239407  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:22.250200  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.260309  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:22.260374  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.271041  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:16:22.281556  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:22.281637  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:22.294003  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:22.305123  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:22.426791  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
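The grep/rm sequence logged above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is deleted so the subsequent "kubeadm init phase kubeconfig" run can regenerate it. A minimal, hypothetical Go sketch of that check-then-remove logic (not minikube's actual kubeadm.go code; the endpoint and paths are taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// Hypothetical sketch: keep each kubeconfig only if it already points at the
// expected control-plane endpoint; otherwise remove it so
// "kubeadm init phase kubeconfig" regenerates it.
func main() {
	endpoint := "https://control-plane.minikube.internal:8444" // from the log above
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range configs {
		// grep exits non-zero when the pattern (or the file itself) is missing.
		if err := exec.Command("sudo", "grep", endpoint, c).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, c)
			_ = exec.Command("sudo", "rm", "-f", c).Run()
		}
	}
}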
	I1028 12:16:18.403494  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:18.903364  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.403869  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.904257  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.404252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.904028  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.404218  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.903631  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.403882  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.904188  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.058068  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:24.059822  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:21.726767  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:21.727332  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:21.727373  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:21.727224  187500 retry.go:31] will retry after 1.684184533s: waiting for machine to come up
	I1028 12:16:23.412691  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:23.413228  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:23.413254  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:23.413177  187500 retry.go:31] will retry after 1.416062445s: waiting for machine to come up
	I1028 12:16:24.830846  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:24.831450  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:24.831480  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:24.831393  187500 retry.go:31] will retry after 2.716897952s: waiting for machine to come up
	I1028 12:16:23.288371  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.506229  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.575063  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.644776  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:23.644896  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.145579  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.645050  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.666456  186547 api_server.go:72] duration metric: took 1.021679294s to wait for apiserver process to appear ...
	I1028 12:16:24.666493  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:24.666518  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:24.667086  186547 api_server.go:269] stopped: https://192.168.50.75:8444/healthz: Get "https://192.168.50.75:8444/healthz": dial tcp 192.168.50.75:8444: connect: connection refused
	I1028 12:16:25.166765  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:23.404152  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:23.904225  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.403333  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.904323  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.404279  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.904317  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.404253  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.904083  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.403621  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.903752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.336957  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.337000  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.337015  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.382075  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.382110  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.667083  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.671910  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:28.671935  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.167591  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.173364  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:29.173397  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.666902  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.672205  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:16:29.679964  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:16:29.680002  186547 api_server.go:131] duration metric: took 5.013500479s to wait for apiserver health ...
	I1028 12:16:29.680014  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:29.680032  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:29.681992  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
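The healthz exchange above (connection refused, then 403 from the anonymous probe, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200 "ok") is the usual progression while an apiserver warms up. A minimal sketch of that wait loop, assuming a plain HTTPS probe with certificate verification skipped (not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Sketch of the healthz wait: poll the endpoint, tolerate connection refused,
// 403 and 500 responses, and stop once the apiserver answers 200 "ok".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.75:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}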
	I1028 12:16:26.558629  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.560116  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:27.550893  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:27.551454  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:27.551476  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:27.551438  187500 retry.go:31] will retry after 2.986712877s: waiting for machine to come up
	I1028 12:16:30.539999  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:30.540601  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:30.540632  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:30.540526  187500 retry.go:31] will retry after 3.947007446s: waiting for machine to come up
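The libmachine lines above show the usual wait-for-DHCP pattern: query the hypervisor for the guest's lease and, while no IP is known yet, sleep a randomized, growing interval before retrying. A small self-contained sketch of that retry loop (the lookup function here is a stand-in, not libmachine's API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// Sketch of the wait-for-IP retry loop with jittered, growing backoff.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	wait := time.Second
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait *= 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 { // simulate two failed lookups before the lease appears
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.72.156", nil
	}, 10)
	fmt.Println(ip, err)
}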
	I1028 12:16:29.683325  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:16:29.697362  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:16:29.717296  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:16:29.726327  186547 system_pods.go:59] 8 kube-system pods found
	I1028 12:16:29.726363  186547 system_pods.go:61] "coredns-7c65d6cfc9-k5h7n" [e203fcce-1a8a-431b-a816-d75b33ca9417] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:16:29.726374  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [2214daee-0302-44cd-9297-836eeb011232] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:16:29.726391  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [c4331c24-07e2-4b50-ab04-31bcd00960e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:16:29.726402  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [9dddd9fb-ad03-4771-af1b-d9e1e024af52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:16:29.726413  186547 system_pods.go:61] "kube-proxy-bqq65" [ed5d0c14-0ddb-4446-a2f7-ae457d629fb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 12:16:29.726423  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [9cfcc366-038f-43a9-b919-48742fa419af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:16:29.726434  186547 system_pods.go:61] "metrics-server-6867b74b74-cgkz9" [3d919412-efb8-4030-a5d0-3c325c824c48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:16:29.726445  186547 system_pods.go:61] "storage-provisioner" [613b003c-1eee-4294-947f-ea7a21edc8d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:16:29.726464  186547 system_pods.go:74] duration metric: took 9.135782ms to wait for pod list to return data ...
	I1028 12:16:29.726478  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:16:29.729971  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:16:29.729996  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:16:29.730009  186547 node_conditions.go:105] duration metric: took 3.525858ms to run NodePressure ...
	I1028 12:16:29.730035  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:30.043775  186547 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048614  186547 kubeadm.go:739] kubelet initialised
	I1028 12:16:30.048638  186547 kubeadm.go:740] duration metric: took 4.83853ms waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048647  186547 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:16:30.053908  186547 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:32.063283  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
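The pod_ready lines above poll each system-critical pod until its Ready condition turns True or the 4m0s budget expires. A rough client-go sketch of the same wait, assuming a local kubeconfig and reusing the pod name from the log (illustrative only, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"coredns-7c65d6cfc9-k5h7n", metav1.GetOptions{}) // pod name from the log
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}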
	I1028 12:16:28.404110  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.904058  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.404042  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.903819  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.404114  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.904140  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.404241  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.903586  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.403858  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.903566  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.057577  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:33.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:35.557338  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:34.491658  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492175  185546 main.go:141] libmachine: (no-preload-871884) Found IP for machine: 192.168.72.156
	I1028 12:16:34.492197  185546 main.go:141] libmachine: (no-preload-871884) Reserving static IP address...
	I1028 12:16:34.492215  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has current primary IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492674  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.492704  185546 main.go:141] libmachine: (no-preload-871884) Reserved static IP address: 192.168.72.156
	I1028 12:16:34.492739  185546 main.go:141] libmachine: (no-preload-871884) DBG | skip adding static IP to network mk-no-preload-871884 - found existing host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"}
	I1028 12:16:34.492763  185546 main.go:141] libmachine: (no-preload-871884) DBG | Getting to WaitForSSH function...
	I1028 12:16:34.492777  185546 main.go:141] libmachine: (no-preload-871884) Waiting for SSH to be available...
	I1028 12:16:34.495176  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495502  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.495536  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495682  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH client type: external
	I1028 12:16:34.495714  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa (-rw-------)
	I1028 12:16:34.495747  185546 main.go:141] libmachine: (no-preload-871884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:34.495770  185546 main.go:141] libmachine: (no-preload-871884) DBG | About to run SSH command:
	I1028 12:16:34.495796  185546 main.go:141] libmachine: (no-preload-871884) DBG | exit 0
	I1028 12:16:34.625650  185546 main.go:141] libmachine: (no-preload-871884) DBG | SSH cmd err, output: <nil>: 
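Before provisioning, libmachine probes the guest by running "exit 0" through an external ssh client until the command succeeds, as logged above. A hypothetical sketch of that probe using the connection options and key path shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

// Hypothetical sketch of the external-SSH availability probe.
func main() {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", "/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa",
		"docker@192.168.72.156",
		"exit 0",
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("SSH not available yet:", err)
		return
	}
	fmt.Println("SSH is available")
}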
	I1028 12:16:34.625959  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetConfigRaw
	I1028 12:16:34.626602  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.629137  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629442  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.629477  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629733  185546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/config.json ...
	I1028 12:16:34.629938  185546 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:34.629957  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:34.630153  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.632415  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.632777  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.632804  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.633033  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.633247  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633422  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633592  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.633762  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.633954  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.633968  185546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:34.738368  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:34.738406  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738696  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:16:34.738729  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738926  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.741750  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742216  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.742322  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742339  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.742538  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742689  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742857  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.743032  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.743248  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.743266  185546 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-871884 && echo "no-preload-871884" | sudo tee /etc/hostname
	I1028 12:16:34.863767  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-871884
	
	I1028 12:16:34.863802  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.867136  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867530  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.867561  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867822  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.868039  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868251  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868430  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.868634  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.868880  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.868905  185546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-871884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-871884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-871884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:34.989420  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:34.989450  185546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:34.989468  185546 buildroot.go:174] setting up certificates
	I1028 12:16:34.989476  185546 provision.go:84] configureAuth start
	I1028 12:16:34.989485  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.989790  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.992627  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.992977  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.993007  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.993225  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.995586  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.995888  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.995911  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.996122  185546 provision.go:143] copyHostCerts
	I1028 12:16:34.996190  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:34.996204  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:34.996261  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:34.996375  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:34.996384  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:34.996408  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:34.996472  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:34.996482  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:34.996499  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:34.996559  185546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.no-preload-871884 san=[127.0.0.1 192.168.72.156 localhost minikube no-preload-871884]
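The provision step above generates a server certificate whose SANs cover the loopback address, the machine IP, and the machine's hostnames. Minikube signs it with its own CA; the sketch below self-signs instead, purely to illustrate how such a SAN list is attached with the standard crypto/x509 package (SAN values copied from the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// Self-signed stand-in for the "generating server cert" step; the point is
// how IP and DNS SANs are set on the certificate template.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-871884"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.156")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-871884"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}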
	I1028 12:16:35.437900  185546 provision.go:177] copyRemoteCerts
	I1028 12:16:35.437961  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:35.437985  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.440936  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441329  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.441361  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441555  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.441756  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.441921  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.442085  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.524911  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:35.554631  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 12:16:35.586946  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:16:35.620121  185546 provision.go:87] duration metric: took 630.630531ms to configureAuth
	I1028 12:16:35.620155  185546 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:35.620395  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:35.620502  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.623316  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623607  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.623643  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623886  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.624099  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624290  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624433  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.624612  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:35.624794  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:35.624810  185546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:35.886145  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:35.886178  185546 machine.go:96] duration metric: took 1.256224912s to provisionDockerMachine
	I1028 12:16:35.886196  185546 start.go:293] postStartSetup for "no-preload-871884" (driver="kvm2")
	I1028 12:16:35.886209  185546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:35.886232  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:35.886615  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:35.886653  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.889615  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890016  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.890048  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.890459  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.890654  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.890798  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.977889  185546 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:35.983360  185546 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:35.983387  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:35.983454  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:35.983543  185546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:35.983674  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:35.997400  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:36.025665  185546 start.go:296] duration metric: took 139.454088ms for postStartSetup
	I1028 12:16:36.025714  185546 fix.go:56] duration metric: took 20.538525254s for fixHost
	I1028 12:16:36.025739  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.028490  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.028933  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.028964  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.029170  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.029386  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029573  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029734  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.029909  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:36.030087  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:36.030098  185546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:36.138559  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117796.101397993
	
	I1028 12:16:36.138589  185546 fix.go:216] guest clock: 1730117796.101397993
	I1028 12:16:36.138599  185546 fix.go:229] Guest: 2024-10-28 12:16:36.101397993 +0000 UTC Remote: 2024-10-28 12:16:36.025719388 +0000 UTC m=+359.787107454 (delta=75.678605ms)
	I1028 12:16:36.138633  185546 fix.go:200] guest clock delta is within tolerance: 75.678605ms
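The fix.go lines above compare the guest's clock (read over SSH with date +%s.%N) against the host-side timestamp and accept the machine when the delta is small. A tiny sketch of that comparison using the exact timestamps from the log (the 2-second tolerance here is an assumption, not minikube's actual threshold):

package main

import (
	"fmt"
	"time"
)

// Sketch of the guest-clock check: compute the absolute delta between the
// guest reading and the host timestamp, then compare against a tolerance.
func main() {
	guest := time.Unix(1730117796, 101397993)                       // guest: 1730117796.101397993
	host := time.Date(2024, 10, 28, 12, 16, 36, 25719388, time.UTC) // host-side timestamp
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance
	fmt.Printf("guest clock delta %s within tolerance: %v\n", delta, delta <= tolerance)
}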
	I1028 12:16:36.138638  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 20.651488254s
	I1028 12:16:36.138663  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.138953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:36.141711  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142144  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.142180  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142323  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.142975  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143165  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143240  185546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:36.143306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.143378  185546 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:36.143399  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.145980  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146166  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146348  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146375  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146507  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146617  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146657  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146701  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.146795  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146882  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.146953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.147013  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.147071  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.147202  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.223364  185546 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:36.246964  185546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:34.561016  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.564296  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.396734  185546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:36.403214  185546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:36.403298  185546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:36.421658  185546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:36.421695  185546 start.go:495] detecting cgroup driver to use...
	I1028 12:16:36.421772  185546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:36.441133  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:36.456750  185546 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:36.456806  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:36.473457  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:36.489210  185546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:36.621054  185546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:36.767341  185546 docker.go:233] disabling docker service ...
	I1028 12:16:36.767432  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:36.784655  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:36.799522  185546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:36.942312  185546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:37.066636  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:37.082284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:37.102462  185546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:37.102530  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.113687  185546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:37.113760  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.125624  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.137036  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.148417  185546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:37.160015  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.171382  185546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.192342  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.204353  185546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:37.215188  185546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:37.215275  185546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:37.230653  185546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:16:37.241484  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:37.382996  185546 ssh_runner.go:195] Run: sudo systemctl restart crio
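The sed/sysctl/systemctl sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), loads br_netfilter, enables IP forwarding, and restarts CRI-O. A condensed, hypothetical Go sketch covering two of those edits plus the restart (not minikube's crio.go; paths and values come from the log):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports any failure with its combined output.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		fmt.Printf("%s %v failed: %v\n%s\n", name, args, err, out)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	run("sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`, conf)
	run("sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf)
	run("sudo", "systemctl", "daemon-reload")
	run("sudo", "systemctl", "restart", "crio")
}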
	I1028 12:16:37.479263  185546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:37.479363  185546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:37.485265  185546 start.go:563] Will wait 60s for crictl version
	I1028 12:16:37.485330  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:37.489545  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:37.536126  185546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:37.536212  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.567538  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.600370  185546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
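	The lines above show minikube reconfiguring CRI-O in place: pointing it at registry.k8s.io/pause:3.10, switching the cgroup manager to cgroupfs, forcing conmon into the pod cgroup, and opening unprivileged ports through default_sysctls, all via sed edits to /etc/crio/crio.conf.d/02-crio.conf, followed by a daemon-reload and a crio restart. A minimal way to double-check the result on the node (assuming SSH access to the guest, e.g. `minikube ssh -p no-preload-871884`) could be:

	# confirm the drop-in now carries the values minikube wrote
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# confirm CRI-O restarted cleanly and answers on the expected socket
	sudo systemctl is-active crio
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version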
	I1028 12:16:33.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:33.903341  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.403703  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.903445  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.404040  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.904246  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.403798  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.903950  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.403912  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.903423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.559329  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:40.057624  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:37.601686  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:37.604235  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604568  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:37.604601  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604782  185546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:37.609354  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:37.624966  185546 kubeadm.go:883] updating cluster {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:37.625081  185546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:37.625117  185546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:37.664112  185546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:37.664149  185546 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:16:37.664262  185546 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.664306  185546 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.664334  185546 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 12:16:37.664311  185546 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.664352  185546 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.664393  185546 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.664434  185546 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.664399  185546 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666080  185546 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.666083  185546 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.666081  185546 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.666142  185546 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.666085  185546 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 12:16:37.666079  185546 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.666185  185546 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666398  185546 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.840639  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.857089  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.859107  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 12:16:37.859358  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.863640  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.867925  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.876221  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.921581  185546 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 12:16:37.921638  185546 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.921689  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.042970  185546 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 12:16:38.043015  185546 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.043068  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093917  185546 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 12:16:38.093954  185546 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 12:16:38.093973  185546 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.093985  185546 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.094029  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094038  185546 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 12:16:38.094057  185546 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.094087  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.094094  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094030  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093976  185546 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 12:16:38.094143  185546 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.094152  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.094175  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.110134  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.110302  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.188922  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.188979  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.193920  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.193929  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.292698  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.325562  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.331855  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.332873  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.345880  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.345951  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.414842  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.470776  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.470949  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 12:16:38.471044  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.481197  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 12:16:38.481333  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:38.503147  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 12:16:38.503171  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:38.532884  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 12:16:38.533001  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:38.552405  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 12:16:38.552417  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 12:16:38.552472  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552485  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 12:16:38.552523  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:38.552529  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552552  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 12:16:38.552527  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 12:16:38.552598  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 12:16:38.829851  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127678  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.575124569s)
	I1028 12:16:41.127722  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 12:16:41.127744  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.575188461s)
	I1028 12:16:41.127775  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 12:16:41.127785  185546 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.297902587s)
	I1028 12:16:41.127803  185546 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127818  185546 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 12:16:41.127850  185546 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127858  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127895  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:39.064564  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:41.563643  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:38.403644  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:38.904220  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.404068  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.904158  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.403660  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.903678  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.404061  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.903568  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.404297  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.904036  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.058025  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:44.557594  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.190694  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062807881s)
	I1028 12:16:43.190736  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 12:16:43.190752  185546 ssh_runner.go:235] Completed: which crictl: (2.062836368s)
	I1028 12:16:43.190773  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:43.190827  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:43.190831  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:45.281583  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.090685426s)
	I1028 12:16:45.281620  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 12:16:45.281650  185546 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281679  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.090821035s)
	I1028 12:16:45.281698  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281750  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:45.325500  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:42.565395  186547 pod_ready.go:93] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.565425  186547 pod_ready.go:82] duration metric: took 12.511487215s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.565438  186547 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572364  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.572388  186547 pod_ready.go:82] duration metric: took 6.941356ms for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572402  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579074  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.579099  186547 pod_ready.go:82] duration metric: took 6.689137ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579116  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584088  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.584108  186547 pod_ready.go:82] duration metric: took 4.985095ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584118  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588810  186547 pod_ready.go:93] pod "kube-proxy-bqq65" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.588837  186547 pod_ready.go:82] duration metric: took 4.711896ms for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588849  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758349  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:43.758376  186547 pod_ready.go:82] duration metric: took 1.169519383s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758387  186547 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:45.766209  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.404022  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:43.903570  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.403673  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.903585  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.403476  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.904069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.403906  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.904264  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.903991  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.059150  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.556589  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.174287  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.84875195s)
	I1028 12:16:49.174340  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 12:16:49.174291  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.892568087s)
	I1028 12:16:49.174422  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 12:16:49.174427  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:49.174466  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:49.174524  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:48.265641  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:50.271513  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:48.404207  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:48.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.404088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.903614  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.403587  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.904256  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.404314  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.903794  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.404122  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.903312  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.557320  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.557540  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:51.438821  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.26426785s)
	I1028 12:16:51.438857  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 12:16:51.438890  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.264449757s)
	I1028 12:16:51.438893  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:51.438911  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 12:16:51.438945  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:52.890902  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451935078s)
	I1028 12:16:52.890933  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 12:16:52.890960  185546 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:52.891010  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:53.643145  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 12:16:53.643208  185546 cache_images.go:123] Successfully loaded all cached images
	I1028 12:16:53.643216  185546 cache_images.go:92] duration metric: took 15.979050279s to LoadCachedImages
	I1028 12:16:53.643231  185546 kubeadm.go:934] updating node { 192.168.72.156 8443 v1.31.2 crio true true} ...
	I1028 12:16:53.643393  185546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-871884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:53.643480  185546 ssh_runner.go:195] Run: crio config
	I1028 12:16:53.701778  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:16:53.701805  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:53.701814  185546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:53.701836  185546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.156 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-871884 NodeName:no-preload-871884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:53.701952  185546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-871884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.156"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.156"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:53.702019  185546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:53.714245  185546 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:53.714327  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:53.725610  185546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 12:16:53.745071  185546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:53.766897  185546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1028 12:16:53.787043  185546 ssh_runner.go:195] Run: grep 192.168.72.156	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:53.791580  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:53.805088  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:53.945235  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
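	At this point minikube has written the kubelet systemd unit, its 10-kubeadm.conf drop-in and the new kubeadm.yaml, reloaded systemd and started kubelet. One way to confirm from a shell on the node that the unit actually picked up the generated drop-in (a sketch, assuming the same paths as in the log above) is:

	# show kubelet.service together with the 10-kubeadm.conf drop-in that was copied over
	sudo systemctl cat kubelet
	# verify the service is running and inspect its most recent output
	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager | tail -n 20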
	I1028 12:16:53.964073  185546 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884 for IP: 192.168.72.156
	I1028 12:16:53.964099  185546 certs.go:194] generating shared ca certs ...
	I1028 12:16:53.964115  185546 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:53.964290  185546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:53.964338  185546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:53.964355  185546 certs.go:256] generating profile certs ...
	I1028 12:16:53.964458  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.key
	I1028 12:16:53.964533  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key.6934b48e
	I1028 12:16:53.964584  185546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key
	I1028 12:16:53.964719  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:53.964750  185546 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:53.964765  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:53.964801  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:53.964831  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:53.964866  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:53.964921  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:53.965632  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:54.004592  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:54.044270  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:54.079496  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:54.114473  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:16:54.141836  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:54.175201  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:54.202282  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:54.227874  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:54.254818  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:54.282950  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:54.310204  185546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:54.328834  185546 ssh_runner.go:195] Run: openssl version
	I1028 12:16:54.335391  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:54.347474  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352687  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352755  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.358834  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:54.373155  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:54.387035  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392179  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392281  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.398488  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:54.412352  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:54.426361  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431415  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431470  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.437583  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:54.450708  185546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:54.456625  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:54.463458  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:54.469939  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:54.477873  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:54.484962  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:54.491679  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
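	Each `openssl x509 -noout -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not have expired by then, 1 means it will. Reproducing the check by hand for one of the files listed above, for example:

	# exit 0: certificate is still valid 24h from now; exit 1: it expires within 24h
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for at least another 24h" || echo "expires within 24h"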
	I1028 12:16:54.498106  185546 kubeadm.go:392] StartCluster: {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:54.498211  185546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:54.498287  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.543142  185546 cri.go:89] found id: ""
	I1028 12:16:54.543250  185546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:54.555948  185546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:54.555971  185546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:54.556021  185546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:54.566954  185546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:54.567990  185546 kubeconfig.go:125] found "no-preload-871884" server: "https://192.168.72.156:8443"
	I1028 12:16:54.570149  185546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:54.581005  185546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.156
	I1028 12:16:54.581039  185546 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:54.581051  185546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:54.581100  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.622676  185546 cri.go:89] found id: ""
	I1028 12:16:54.622742  185546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:54.642427  185546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:54.655104  185546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:54.655131  185546 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:54.655199  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:54.665367  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:54.665432  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:54.675664  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:54.685921  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:54.685997  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:54.698451  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.709982  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:54.710060  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.721243  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:54.731699  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:54.731780  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:54.743365  185546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:54.754284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:54.868055  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.645470  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.858805  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.940632  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:56.020654  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:56.020735  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.764963  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:54.766822  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.768500  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.403716  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:53.903325  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.404326  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.903529  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.403679  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.903480  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.403429  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.904252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.403496  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.058614  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.556085  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:00.556460  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.521589  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.021710  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.066266  185546 api_server.go:72] duration metric: took 1.045608096s to wait for apiserver process to appear ...
	I1028 12:16:57.066305  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:57.066326  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:16:57.066862  185546 api_server.go:269] stopped: https://192.168.72.156:8443/healthz: Get "https://192.168.72.156:8443/healthz": dial tcp 192.168.72.156:8443: connect: connection refused
	I1028 12:16:57.567124  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.159147  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.159179  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.159193  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.171505  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.171530  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.566560  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.570920  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:00.570947  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.066537  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.071173  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.071205  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.566517  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.577822  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.577851  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:02.066514  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:02.071117  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:17:02.078265  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:17:02.078293  185546 api_server.go:131] duration metric: took 5.011981306s to wait for apiserver health ...
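
The healthz wait above shows the typical bootstrap sequence: the endpoint first refuses connections, then returns 403 for the anonymous user until the RBAC bootstrap roles exist, then 500 while individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending, and finally 200. minikube treats anything other than 200 as "not yet healthy" and retries on an interval. The following is a minimal Go sketch of such a polling loop, not minikube's api_server.go itself; the URL, timeout, and poll interval are illustrative, and TLS verification is skipped only to keep the sketch self-contained (a real probe would trust the cluster CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline expires. 403 (anonymous user blocked until RBAC bootstrap
    // finishes) and 500 (post-start hooks still pending) both count as not ready.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Illustrative only: skips certificate verification for brevity.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.156:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
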
	I1028 12:17:02.078302  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:17:02.078308  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:17:02.080348  185546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:16:59.267565  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:01.766399  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.404020  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:58.903743  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.403548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.903515  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.403423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.903757  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.403620  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.903710  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.403932  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.903729  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.081626  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:17:02.103809  185546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
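
The "Configuring bridge CNI" step copies a single conflist into /etc/cni/net.d/ on the guest. The exact 496-byte file minikube generates is not shown in the log; the snippet below is only a representative bridge-plugin conflist of the same general shape (plugin names, subnet, and file mode are illustrative), written from Go to keep the examples in one language.

    package main

    import (
        "fmt"
        "os"
    )

    // Representative bridge CNI conflist; values are illustrative, not the
    // exact contents minikube writes to /etc/cni/net.d/1-k8s.conflist.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            fmt.Println("write conflist:", err)
        }
    }
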
	I1028 12:17:02.135225  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:17:02.152051  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:17:02.152102  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:17:02.152113  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:17:02.152125  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:17:02.152133  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:17:02.152146  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:17:02.152159  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:17:02.152167  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:17:02.152174  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:17:02.152183  185546 system_pods.go:74] duration metric: took 16.930389ms to wait for pod list to return data ...
	I1028 12:17:02.152192  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:17:02.157475  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:17:02.157504  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:17:02.157515  185546 node_conditions.go:105] duration metric: took 5.317861ms to run NodePressure ...
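
The NodePressure verification above reads the node object once, reports its capacity (CPU, ephemeral storage), and checks that no pressure condition is set. A minimal client-go sketch of the same idea follows; it builds its client from the in-guest kubeconfig path seen elsewhere in this log, and the error handling and output are illustrative rather than minikube's node_conditions.go.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // checkNodePressure lists nodes, prints their capacity, and fails if any
    // memory/disk/PID pressure condition is reported as True.
    func checkNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
                n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure ||
                    c.Type == corev1.NodeDiskPressure ||
                    c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                }
            }
        }
        return nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        if err := checkNodePressure(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
            fmt.Println(err)
        }
    }
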
	I1028 12:17:02.157548  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:17:02.476553  185546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482764  185546 kubeadm.go:739] kubelet initialised
	I1028 12:17:02.482789  185546 kubeadm.go:740] duration metric: took 6.205425ms waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482798  185546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:02.487480  185546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.495454  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495482  185546 pod_ready.go:82] duration metric: took 7.976331ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.495495  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495505  185546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.499904  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499931  185546 pod_ready.go:82] duration metric: took 4.41555ms for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.499941  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499948  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.504272  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504300  185546 pod_ready.go:82] duration metric: took 4.345522ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.504325  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504337  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.538786  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538826  185546 pod_ready.go:82] duration metric: took 34.474629ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.538841  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538851  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.939462  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939490  185546 pod_ready.go:82] duration metric: took 400.627739ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.939502  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939511  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.339338  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339369  185546 pod_ready.go:82] duration metric: took 399.848996ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.339384  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339394  185546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.739585  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739640  185546 pod_ready.go:82] duration metric: took 400.235271ms for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.739655  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739665  185546 pod_ready.go:39] duration metric: took 1.256859696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
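
The "extra waiting" above repeatedly fetches each system-critical pod and checks its PodReady condition, bailing out early with the "(skipping!)" messages while the hosting node is not Ready. A function-level sketch of that per-pod check, reusing the client construction from the node-conditions sketch earlier (the poll interval is illustrative):

    // waitPodReady polls a pod until its PodReady condition is True or the
    // timeout expires, mirroring the pod_ready.go loop in this log.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }
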
	I1028 12:17:03.739682  185546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:17:03.755064  185546 ops.go:34] apiserver oom_adj: -16
	I1028 12:17:03.755086  185546 kubeadm.go:597] duration metric: took 9.199108841s to restartPrimaryControlPlane
	I1028 12:17:03.755096  185546 kubeadm.go:394] duration metric: took 9.256999682s to StartCluster
	I1028 12:17:03.755111  185546 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.755175  185546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:17:03.757048  185546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.757327  185546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:17:03.757425  185546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:17:03.757535  185546 addons.go:69] Setting storage-provisioner=true in profile "no-preload-871884"
	I1028 12:17:03.757563  185546 addons.go:234] Setting addon storage-provisioner=true in "no-preload-871884"
	I1028 12:17:03.757565  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:17:03.757589  185546 addons.go:69] Setting metrics-server=true in profile "no-preload-871884"
	I1028 12:17:03.757617  185546 addons.go:234] Setting addon metrics-server=true in "no-preload-871884"
	I1028 12:17:03.757568  185546 addons.go:69] Setting default-storageclass=true in profile "no-preload-871884"
	W1028 12:17:03.757626  185546 addons.go:243] addon metrics-server should already be in state true
	I1028 12:17:03.757635  185546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-871884"
	W1028 12:17:03.757573  185546 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:17:03.757669  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.757713  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.758051  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758093  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758196  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758233  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758231  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758355  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.759378  185546 out.go:177] * Verifying Kubernetes components...
	I1028 12:17:03.761108  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:17:03.786180  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I1028 12:17:03.786344  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I1028 12:17:03.787005  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787096  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.787658  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.788034  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.789126  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.789149  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.789333  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.789366  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.790199  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.790591  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.793866  185546 addons.go:234] Setting addon default-storageclass=true in "no-preload-871884"
	W1028 12:17:03.793890  185546 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:17:03.793920  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.794332  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.794384  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.806461  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I1028 12:17:03.806960  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.807572  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1028 12:17:03.807644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.807835  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808074  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.808188  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.808349  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.808603  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.808624  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808993  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.809610  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.809665  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.810531  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.812676  185546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:17:03.813307  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I1028 12:17:03.813821  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.814228  185546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:03.814248  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:17:03.814266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.814350  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.814373  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.814848  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.815284  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.815323  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.817336  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817751  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.817776  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817889  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.818079  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.818219  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.818357  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.830425  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1028 12:17:03.830940  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.831486  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.831507  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.831905  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.832125  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.834275  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.835260  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I1028 12:17:03.835687  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.836180  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.836200  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.836527  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.836604  185546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:17:03.836741  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.838273  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:17:03.838290  185546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:17:03.838306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.838508  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.839044  185546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:03.839060  185546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:17:03.839080  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.842836  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843272  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.843291  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843461  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.843598  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.843767  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.843774  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843909  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.844312  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.844330  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.845228  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.845354  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.845474  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.845623  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.981979  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:17:04.003932  185546 node_ready.go:35] waiting up to 6m0s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:04.071389  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:04.169654  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:04.186781  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:17:04.186808  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:17:04.252889  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:17:04.252921  185546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:17:04.315140  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.315166  185546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:17:04.395995  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.489084  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489122  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489426  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.489445  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489470  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.489481  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489490  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489763  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489781  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.497272  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.497297  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.497647  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.497677  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.497702  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185405  185546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.015712456s)
	I1028 12:17:05.185458  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185469  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.185749  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.185768  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185778  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185786  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.186142  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.186160  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.186149  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.294924  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.294953  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295282  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295301  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295319  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295329  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.295339  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295584  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295615  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295622  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295641  185546 addons.go:475] Verifying addon metrics-server=true in "no-preload-871884"
	I1028 12:17:05.297689  185546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1028 12:17:02.557465  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:04.557517  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:05.298945  185546 addons.go:510] duration metric: took 1.541528913s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1028 12:17:06.008731  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.766439  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:06.267839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:03.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:03.904015  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:03.904157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:03.952859  186170 cri.go:89] found id: ""
	I1028 12:17:03.952891  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.952903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:03.952911  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:03.952972  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:03.991366  186170 cri.go:89] found id: ""
	I1028 12:17:03.991395  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.991406  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:03.991414  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:03.991472  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:04.030462  186170 cri.go:89] found id: ""
	I1028 12:17:04.030494  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.030505  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:04.030513  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:04.030577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:04.066765  186170 cri.go:89] found id: ""
	I1028 12:17:04.066797  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.066808  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:04.066829  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:04.066890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:04.113262  186170 cri.go:89] found id: ""
	I1028 12:17:04.113291  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.113321  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:04.113329  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:04.113397  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:04.162767  186170 cri.go:89] found id: ""
	I1028 12:17:04.162804  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.162816  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:04.162832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:04.162906  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:04.209735  186170 cri.go:89] found id: ""
	I1028 12:17:04.209768  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.209780  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:04.209788  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:04.209853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:04.258945  186170 cri.go:89] found id: ""
	I1028 12:17:04.258981  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.258993  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:04.259004  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:04.259031  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:04.314152  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:04.314191  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:04.330109  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:04.330154  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:04.495068  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:04.495096  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:04.495111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:04.576574  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:04.576612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
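
With no apiserver reachable (the connection to localhost:8443 is refused), this code path falls back to gathering diagnostics directly on the guest: kubelet and CRI-O journals via journalctl, and per-component container listings via crictl with a --name filter, all of which come back empty here. Below is a small sketch of that crictl query run locally with os/exec rather than through the ssh_runner; the command and flags are taken from the log, while sudo handling and error wrapping are simplified.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs of all containers (running or exited)
    // whose name matches the filter, using the same invocation as the log:
    // crictl ps -a --quiet --name=<filter>.
    func listContainerIDs(filter string) ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+filter).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listContainerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("crictl:", err)
            return
        }
        fmt.Printf("found %d container(s): %v\n", len(ids), ids)
    }
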
	I1028 12:17:07.129008  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:07.149770  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:07.149835  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:07.200603  186170 cri.go:89] found id: ""
	I1028 12:17:07.200636  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.200648  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:07.200656  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:07.200733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:07.242681  186170 cri.go:89] found id: ""
	I1028 12:17:07.242709  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.242717  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:07.242723  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:07.242770  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:07.286826  186170 cri.go:89] found id: ""
	I1028 12:17:07.286860  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.286873  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:07.286881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:07.286943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:07.327730  186170 cri.go:89] found id: ""
	I1028 12:17:07.327765  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.327777  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:07.327787  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:07.327855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:07.369138  186170 cri.go:89] found id: ""
	I1028 12:17:07.369167  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.369178  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:07.369187  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:07.369257  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:07.411640  186170 cri.go:89] found id: ""
	I1028 12:17:07.411678  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.411690  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:07.411697  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:07.411758  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:07.454066  186170 cri.go:89] found id: ""
	I1028 12:17:07.454099  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.454109  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:07.454119  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:07.454180  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:07.489981  186170 cri.go:89] found id: ""
	I1028 12:17:07.490011  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.490020  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:07.490030  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:07.490044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:07.559890  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:07.559916  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:07.559927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:07.641601  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:07.641647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.687694  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:07.687732  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:07.739346  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:07.739389  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:06.558978  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:09.058557  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:08.507261  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:10.508790  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:11.007666  185546 node_ready.go:49] node "no-preload-871884" has status "Ready":"True"
	I1028 12:17:11.007698  185546 node_ready.go:38] duration metric: took 7.003728813s for node "no-preload-871884" to be "Ready" ...
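
node_ready.go applies the same pattern to the node object: fetch the node, look for the NodeReady condition, and stop once it reports True (here after roughly 7 seconds). A function-level sketch, again reusing the client setup from the node-conditions example above (interval and error text are illustrative):

    // waitNodeReady polls a node until its Ready condition is True or the
    // timeout expires, mirroring node_ready.go in this log.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }
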
	I1028 12:17:11.007710  185546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:11.014677  185546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020020  185546 pod_ready.go:93] pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:11.020042  185546 pod_ready.go:82] duration metric: took 5.339994ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020053  185546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:08.765053  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.766104  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.262069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:10.277467  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:10.277566  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:10.320331  186170 cri.go:89] found id: ""
	I1028 12:17:10.320366  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.320378  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:10.320387  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:10.320455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:10.357204  186170 cri.go:89] found id: ""
	I1028 12:17:10.357235  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.357252  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:10.357261  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:10.357324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:10.392480  186170 cri.go:89] found id: ""
	I1028 12:17:10.392510  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.392519  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:10.392526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:10.392574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:10.430084  186170 cri.go:89] found id: ""
	I1028 12:17:10.430120  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.430132  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:10.430140  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:10.430207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:10.479689  186170 cri.go:89] found id: ""
	I1028 12:17:10.479717  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.479724  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:10.479730  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:10.479786  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:10.520871  186170 cri.go:89] found id: ""
	I1028 12:17:10.520902  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.520912  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:10.520920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:10.520978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:10.559121  186170 cri.go:89] found id: ""
	I1028 12:17:10.559154  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.559167  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:10.559176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:10.559254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:10.596552  186170 cri.go:89] found id: ""
	I1028 12:17:10.596583  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.596594  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:10.596603  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:10.596615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:10.673014  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:10.673037  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:10.673055  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:10.762942  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:10.762982  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:10.805866  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:10.805901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:10.858861  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:10.858895  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:11.556955  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.560411  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.027402  185546 pod_ready.go:103] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:14.026501  185546 pod_ready.go:93] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.026537  185546 pod_ready.go:82] duration metric: took 3.006475793s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.026552  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036355  185546 pod_ready.go:93] pod "kube-apiserver-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.036379  185546 pod_ready.go:82] duration metric: took 9.819102ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036391  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042711  185546 pod_ready.go:93] pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.042734  185546 pod_ready.go:82] duration metric: took 6.336523ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042745  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047387  185546 pod_ready.go:93] pod "kube-proxy-6rc4l" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.047409  185546 pod_ready.go:82] duration metric: took 4.657388ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047422  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208217  185546 pod_ready.go:93] pod "kube-scheduler-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.208243  185546 pod_ready.go:82] duration metric: took 160.813834ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208254  185546 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:16.214834  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.268493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:15.271377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.373936  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:13.387904  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:13.387969  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:13.435502  186170 cri.go:89] found id: ""
	I1028 12:17:13.435528  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.435536  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:13.435547  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:13.435593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:13.475592  186170 cri.go:89] found id: ""
	I1028 12:17:13.475621  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.475631  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:13.475639  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:13.475703  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:13.524964  186170 cri.go:89] found id: ""
	I1028 12:17:13.524993  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.525002  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:13.525010  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:13.525071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:13.570408  186170 cri.go:89] found id: ""
	I1028 12:17:13.570437  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.570446  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:13.570455  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:13.570515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:13.620981  186170 cri.go:89] found id: ""
	I1028 12:17:13.621008  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.621016  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:13.621022  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:13.621071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:13.657345  186170 cri.go:89] found id: ""
	I1028 12:17:13.657375  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.657385  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:13.657393  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:13.657455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:13.695975  186170 cri.go:89] found id: ""
	I1028 12:17:13.695998  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.696005  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:13.696012  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:13.696059  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:13.744055  186170 cri.go:89] found id: ""
	I1028 12:17:13.744093  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.744112  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:13.744128  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:13.744143  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:13.798898  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:13.798936  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:13.813630  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:13.813676  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:13.886699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:13.886733  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:13.886750  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:13.972377  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:13.972419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.518525  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:16.532512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:16.532594  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:16.573345  186170 cri.go:89] found id: ""
	I1028 12:17:16.573370  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.573377  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:16.573384  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:16.573449  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:16.611130  186170 cri.go:89] found id: ""
	I1028 12:17:16.611159  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.611170  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:16.611179  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:16.611242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:16.646155  186170 cri.go:89] found id: ""
	I1028 12:17:16.646180  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.646187  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:16.646194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:16.646253  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:16.680731  186170 cri.go:89] found id: ""
	I1028 12:17:16.680761  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.680770  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:16.680776  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:16.680836  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:16.725323  186170 cri.go:89] found id: ""
	I1028 12:17:16.725351  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.725361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:16.725370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:16.725429  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:16.761810  186170 cri.go:89] found id: ""
	I1028 12:17:16.761839  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.761850  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:16.761859  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:16.761919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:16.797737  186170 cri.go:89] found id: ""
	I1028 12:17:16.797771  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.797783  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:16.797791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:16.797854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:16.834045  186170 cri.go:89] found id: ""
	I1028 12:17:16.834077  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.834087  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:16.834098  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:16.834111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:16.885174  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:16.885211  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:16.900281  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:16.900312  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:16.973761  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:16.973784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:16.973799  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:17.058711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:17.058747  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
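	(In the gathering cycle above, every crictl query for a control-plane component returns zero containers and the describe-nodes call fails because nothing answers on localhost:8443, so minikube falls back to kubelet, dmesg, CRI-O and container-status logs. A minimal sketch of re-running the same checks by hand on the affected node follows; the profile name is a placeholder, and the commands are the ones quoted verbatim in the log lines above, so treat this as an illustration rather than the test's own tooling.)

	    minikube ssh -p <profile>                                  # open a shell on the node; <profile> is a placeholder
	    sudo crictl ps -a --quiet --name=kube-apiserver           # empty output while the apiserver container is missing
	    sudo journalctl -u kubelet -n 400                          # kubelet logs, as gathered above
	    sudo journalctl -u crio -n 400                             # CRI-O logs, as gathered above
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	                                                               # fails with "connection to the server localhost:8443 was refused" until the apiserver is back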
	I1028 12:17:16.056296  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.557898  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.215767  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:20.219613  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:17.764493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.766909  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:21.769560  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.605867  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:19.620832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:19.620896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:19.660722  186170 cri.go:89] found id: ""
	I1028 12:17:19.660747  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.660757  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:19.660765  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:19.660825  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:19.698537  186170 cri.go:89] found id: ""
	I1028 12:17:19.698571  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.698581  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:19.698590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:19.698639  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:19.736911  186170 cri.go:89] found id: ""
	I1028 12:17:19.736945  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.736956  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:19.736972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:19.737041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:19.779343  186170 cri.go:89] found id: ""
	I1028 12:17:19.779371  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.779379  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:19.779384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:19.779432  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:19.824749  186170 cri.go:89] found id: ""
	I1028 12:17:19.824778  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.824788  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:19.824796  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:19.824861  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:19.862810  186170 cri.go:89] found id: ""
	I1028 12:17:19.862850  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.862862  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:19.862871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:19.862935  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:19.910552  186170 cri.go:89] found id: ""
	I1028 12:17:19.910583  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.910592  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:19.910601  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:19.910663  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:19.956806  186170 cri.go:89] found id: ""
	I1028 12:17:19.956838  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.956850  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:19.956862  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:19.956879  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:20.018142  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:20.018187  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:20.035656  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:20.035696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:20.112484  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:20.112515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:20.112535  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:20.203034  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:20.203079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:22.749198  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:22.762993  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:22.763073  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:22.808879  186170 cri.go:89] found id: ""
	I1028 12:17:22.808923  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.808934  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:22.808943  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:22.809013  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:22.845367  186170 cri.go:89] found id: ""
	I1028 12:17:22.845393  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.845401  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:22.845407  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:22.845457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:22.884841  186170 cri.go:89] found id: ""
	I1028 12:17:22.884870  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.884877  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:22.884884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:22.884936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:22.921830  186170 cri.go:89] found id: ""
	I1028 12:17:22.921857  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.921865  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:22.921871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:22.921917  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:22.958981  186170 cri.go:89] found id: ""
	I1028 12:17:22.959016  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.959028  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:22.959038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:22.959138  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:22.993987  186170 cri.go:89] found id: ""
	I1028 12:17:22.994022  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.994033  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:22.994041  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:22.994112  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:23.036235  186170 cri.go:89] found id: ""
	I1028 12:17:23.036262  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.036270  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:23.036276  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:23.036326  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:23.084209  186170 cri.go:89] found id: ""
	I1028 12:17:23.084237  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.084248  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:23.084260  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:23.084274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:23.168684  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:23.168725  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:23.211205  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:23.211246  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:23.269140  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:23.269174  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:23.283588  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:23.283620  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:17:21.057114  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:23.058470  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:25.556210  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:22.714692  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.717301  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.269572  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:26.765467  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:17:23.363349  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:25.864503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:25.881420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:25.881505  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:25.920194  186170 cri.go:89] found id: ""
	I1028 12:17:25.920230  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.920242  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:25.920250  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:25.920319  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:25.982898  186170 cri.go:89] found id: ""
	I1028 12:17:25.982940  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.982952  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:25.982960  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:25.983026  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:26.042807  186170 cri.go:89] found id: ""
	I1028 12:17:26.042848  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.042856  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:26.042863  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:26.042914  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:26.081683  186170 cri.go:89] found id: ""
	I1028 12:17:26.081717  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.081729  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:26.081738  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:26.081811  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:26.118390  186170 cri.go:89] found id: ""
	I1028 12:17:26.118419  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.118426  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:26.118433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:26.118482  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:26.154065  186170 cri.go:89] found id: ""
	I1028 12:17:26.154100  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.154108  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:26.154114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:26.154168  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:26.195602  186170 cri.go:89] found id: ""
	I1028 12:17:26.195634  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.195645  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:26.195656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:26.195711  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:26.237315  186170 cri.go:89] found id: ""
	I1028 12:17:26.237350  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.237361  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:26.237371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:26.237383  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:26.319079  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:26.319121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:26.360967  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:26.360996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:26.414689  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:26.414728  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:26.429733  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:26.429763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:26.503297  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:28.056563  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:30.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:27.215356  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.216505  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.267239  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.765267  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.003479  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:29.017833  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:29.017908  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:29.067759  186170 cri.go:89] found id: ""
	I1028 12:17:29.067785  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.067793  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:29.067799  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:29.067856  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:29.114369  186170 cri.go:89] found id: ""
	I1028 12:17:29.114401  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.114411  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:29.114419  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:29.114511  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:29.154640  186170 cri.go:89] found id: ""
	I1028 12:17:29.154672  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.154683  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:29.154692  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:29.154749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:29.194296  186170 cri.go:89] found id: ""
	I1028 12:17:29.194331  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.194341  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:29.194349  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:29.194413  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:29.239107  186170 cri.go:89] found id: ""
	I1028 12:17:29.239133  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.239146  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:29.239152  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:29.239199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:29.274900  186170 cri.go:89] found id: ""
	I1028 12:17:29.274928  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.274937  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:29.274946  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:29.275010  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:29.310307  186170 cri.go:89] found id: ""
	I1028 12:17:29.310336  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.310346  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:29.310354  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:29.310421  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:29.345285  186170 cri.go:89] found id: ""
	I1028 12:17:29.345313  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.345351  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:29.345363  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:29.345379  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:29.402044  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:29.402094  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:29.417578  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:29.417615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:29.497733  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:29.497757  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:29.497773  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:29.587148  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:29.587202  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:32.132697  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:32.146675  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:32.146746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:32.188640  186170 cri.go:89] found id: ""
	I1028 12:17:32.188669  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.188681  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:32.188690  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:32.188749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:32.228690  186170 cri.go:89] found id: ""
	I1028 12:17:32.228726  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.228738  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:32.228745  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:32.228812  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:32.269133  186170 cri.go:89] found id: ""
	I1028 12:17:32.269180  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.269191  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:32.269200  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:32.269279  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:32.319757  186170 cri.go:89] found id: ""
	I1028 12:17:32.319796  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.319809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:32.319817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:32.319888  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:32.360072  186170 cri.go:89] found id: ""
	I1028 12:17:32.360104  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.360116  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:32.360125  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:32.360192  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:32.413256  186170 cri.go:89] found id: ""
	I1028 12:17:32.413286  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.413297  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:32.413319  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:32.413371  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:32.454505  186170 cri.go:89] found id: ""
	I1028 12:17:32.454536  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.454547  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:32.454555  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:32.454621  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:32.495091  186170 cri.go:89] found id: ""
	I1028 12:17:32.495129  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.495138  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:32.495148  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:32.495163  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:32.548669  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:32.548712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:32.566003  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:32.566044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:32.642079  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:32.642104  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:32.642117  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:32.727317  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:32.727361  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:33.055776  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.056525  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.714959  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:33.715292  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.715824  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:34.267155  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:36.765199  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.278752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:35.292256  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:35.292344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:35.328420  186170 cri.go:89] found id: ""
	I1028 12:17:35.328447  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.328457  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:35.328465  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:35.328528  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:35.365120  186170 cri.go:89] found id: ""
	I1028 12:17:35.365153  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.365162  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:35.365170  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:35.365236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:35.402057  186170 cri.go:89] found id: ""
	I1028 12:17:35.402093  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.402105  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:35.402114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:35.402179  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:35.436496  186170 cri.go:89] found id: ""
	I1028 12:17:35.436523  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.436531  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:35.436536  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:35.436593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:35.473369  186170 cri.go:89] found id: ""
	I1028 12:17:35.473399  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.473409  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:35.473416  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:35.473480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:35.511258  186170 cri.go:89] found id: ""
	I1028 12:17:35.511293  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.511305  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:35.511337  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:35.511403  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:35.548430  186170 cri.go:89] found id: ""
	I1028 12:17:35.548461  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.548472  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:35.548479  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:35.548526  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:35.584324  186170 cri.go:89] found id: ""
	I1028 12:17:35.584357  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.584369  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:35.584379  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:35.584394  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:35.598813  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:35.598855  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:35.676911  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:35.676935  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:35.676948  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:35.757166  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:35.757205  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:35.801381  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:35.801411  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:37.557428  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.057039  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:37.715996  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.213916  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.765841  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:41.267477  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.356346  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:38.370346  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:38.370436  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:38.413623  186170 cri.go:89] found id: ""
	I1028 12:17:38.413653  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.413664  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:38.413671  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:38.413741  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:38.450656  186170 cri.go:89] found id: ""
	I1028 12:17:38.450682  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.450691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:38.450697  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:38.450754  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:38.491050  186170 cri.go:89] found id: ""
	I1028 12:17:38.491083  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.491090  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:38.491096  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:38.491146  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:38.529708  186170 cri.go:89] found id: ""
	I1028 12:17:38.529735  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.529743  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:38.529749  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:38.529808  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:38.566632  186170 cri.go:89] found id: ""
	I1028 12:17:38.566659  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.566673  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:38.566681  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:38.566746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:38.602323  186170 cri.go:89] found id: ""
	I1028 12:17:38.602362  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.602374  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:38.602382  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:38.602444  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:38.646462  186170 cri.go:89] found id: ""
	I1028 12:17:38.646487  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.646494  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:38.646499  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:38.646560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:38.681803  186170 cri.go:89] found id: ""
	I1028 12:17:38.681830  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.681837  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:38.681847  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:38.681858  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:38.697360  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:38.697387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:38.769502  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:38.769549  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:38.769566  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:38.852029  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:38.852068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:38.895585  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:38.895621  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.450844  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:41.464665  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:41.464731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:41.507199  186170 cri.go:89] found id: ""
	I1028 12:17:41.507265  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.507274  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:41.507280  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:41.507351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:41.550126  186170 cri.go:89] found id: ""
	I1028 12:17:41.550158  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.550168  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:41.550176  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:41.550237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:41.588914  186170 cri.go:89] found id: ""
	I1028 12:17:41.588942  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.588953  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:41.588961  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:41.589027  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:41.625255  186170 cri.go:89] found id: ""
	I1028 12:17:41.625285  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.625297  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:41.625315  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:41.625386  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:41.663786  186170 cri.go:89] found id: ""
	I1028 12:17:41.663816  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.663833  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:41.663844  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:41.663911  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:41.698330  186170 cri.go:89] found id: ""
	I1028 12:17:41.698357  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.698364  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:41.698371  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:41.698424  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:41.734658  186170 cri.go:89] found id: ""
	I1028 12:17:41.734688  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.734699  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:41.734707  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:41.734776  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:41.773227  186170 cri.go:89] found id: ""
	I1028 12:17:41.773262  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.773273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:41.773286  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:41.773301  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:41.815830  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:41.815866  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.866789  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:41.866832  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:41.882088  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:41.882121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:41.953895  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:41.953917  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:41.953933  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:42.556504  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.557351  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:42.216159  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.216286  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:43.764776  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.265654  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.538655  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:44.551644  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:44.551724  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:44.589370  186170 cri.go:89] found id: ""
	I1028 12:17:44.589400  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.589407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:44.589413  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:44.589473  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:44.625143  186170 cri.go:89] found id: ""
	I1028 12:17:44.625175  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.625185  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:44.625198  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:44.625283  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:44.664579  186170 cri.go:89] found id: ""
	I1028 12:17:44.664609  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.664620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:44.664628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:44.664692  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:44.700009  186170 cri.go:89] found id: ""
	I1028 12:17:44.700038  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.700046  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:44.700053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:44.700119  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:44.736283  186170 cri.go:89] found id: ""
	I1028 12:17:44.736316  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.736323  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:44.736331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:44.736393  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:44.772214  186170 cri.go:89] found id: ""
	I1028 12:17:44.772249  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.772261  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:44.772270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:44.772324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:44.808152  186170 cri.go:89] found id: ""
	I1028 12:17:44.808187  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.808198  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:44.808206  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:44.808276  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:44.844208  186170 cri.go:89] found id: ""
	I1028 12:17:44.844238  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.844251  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:44.844264  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:44.844286  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:44.925988  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:44.926029  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:44.964936  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:44.964969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:45.015630  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:45.015675  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:45.030537  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:45.030571  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:45.103861  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:47.604548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:47.618858  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:47.618941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:47.663237  186170 cri.go:89] found id: ""
	I1028 12:17:47.663267  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.663278  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:47.663285  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:47.663350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:47.703207  186170 cri.go:89] found id: ""
	I1028 12:17:47.703236  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.703244  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:47.703250  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:47.703322  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:47.743050  186170 cri.go:89] found id: ""
	I1028 12:17:47.743081  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.743091  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:47.743099  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:47.743161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:47.789956  186170 cri.go:89] found id: ""
	I1028 12:17:47.789982  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.789989  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:47.789996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:47.790055  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:47.833134  186170 cri.go:89] found id: ""
	I1028 12:17:47.833165  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.833177  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:47.833184  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:47.833241  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:47.870881  186170 cri.go:89] found id: ""
	I1028 12:17:47.870905  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.870916  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:47.870925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:47.870992  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:47.908121  186170 cri.go:89] found id: ""
	I1028 12:17:47.908155  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.908165  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:47.908173  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:47.908236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:47.946835  186170 cri.go:89] found id: ""
	I1028 12:17:47.946871  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.946884  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:47.946896  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:47.946914  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:47.999276  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:47.999316  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:48.016268  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:48.016306  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:48.099928  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:48.099959  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:48.099976  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:48.180885  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:48.180937  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:46.565643  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.057078  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.716667  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.216308  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:48.267160  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.764737  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.727685  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:50.741737  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:50.741820  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:50.782030  186170 cri.go:89] found id: ""
	I1028 12:17:50.782060  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.782081  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:50.782090  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:50.782157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:50.817423  186170 cri.go:89] found id: ""
	I1028 12:17:50.817453  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.817464  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:50.817471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:50.817523  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:50.857203  186170 cri.go:89] found id: ""
	I1028 12:17:50.857232  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.857242  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:50.857249  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:50.857324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:50.894196  186170 cri.go:89] found id: ""
	I1028 12:17:50.894236  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.894248  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:50.894259  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:50.894325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:50.930014  186170 cri.go:89] found id: ""
	I1028 12:17:50.930046  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.930056  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:50.930064  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:50.930128  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:50.967742  186170 cri.go:89] found id: ""
	I1028 12:17:50.967774  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.967785  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:50.967799  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:50.967857  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:51.013232  186170 cri.go:89] found id: ""
	I1028 12:17:51.013258  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.013269  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:51.013281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:51.013341  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:51.052871  186170 cri.go:89] found id: ""
	I1028 12:17:51.052900  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.052912  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:51.052923  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:51.052943  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:51.106536  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:51.106579  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:51.121628  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:51.121670  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:51.200215  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:51.200249  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:51.200266  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:51.291948  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:51.291996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:51.058399  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.556450  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:55.557043  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:51.715736  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.215689  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:52.764839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.766020  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:57.269346  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.837066  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:53.851660  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:53.851747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:53.888799  186170 cri.go:89] found id: ""
	I1028 12:17:53.888835  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.888846  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:53.888855  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:53.888919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:53.923838  186170 cri.go:89] found id: ""
	I1028 12:17:53.923867  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.923875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:53.923880  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:53.923940  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:53.960264  186170 cri.go:89] found id: ""
	I1028 12:17:53.960293  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.960302  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:53.960307  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:53.960356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:53.995913  186170 cri.go:89] found id: ""
	I1028 12:17:53.995943  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.995952  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:53.995958  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:53.996009  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:54.032127  186170 cri.go:89] found id: ""
	I1028 12:17:54.032155  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.032163  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:54.032169  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:54.032219  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:54.070230  186170 cri.go:89] found id: ""
	I1028 12:17:54.070267  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.070279  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:54.070288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:54.070346  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:54.104992  186170 cri.go:89] found id: ""
	I1028 12:17:54.105024  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.105032  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:54.105038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:54.105099  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:54.140071  186170 cri.go:89] found id: ""
	I1028 12:17:54.140102  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.140113  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:54.140124  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:54.140137  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:54.195304  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:54.195353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:54.210315  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:54.210355  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:54.301247  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:54.301279  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:54.301300  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:54.382818  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:54.382876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:56.928740  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:56.942264  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:56.942334  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:56.979445  186170 cri.go:89] found id: ""
	I1028 12:17:56.979494  186170 logs.go:282] 0 containers: []
	W1028 12:17:56.979503  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:56.979510  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:56.979580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:57.017777  186170 cri.go:89] found id: ""
	I1028 12:17:57.017817  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.017831  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:57.017840  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:57.017954  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:57.058842  186170 cri.go:89] found id: ""
	I1028 12:17:57.058873  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.058881  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:57.058887  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:57.058941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:57.096365  186170 cri.go:89] found id: ""
	I1028 12:17:57.096393  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.096401  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:57.096408  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:57.096456  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:57.135395  186170 cri.go:89] found id: ""
	I1028 12:17:57.135425  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.135433  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:57.135440  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:57.135502  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:57.173426  186170 cri.go:89] found id: ""
	I1028 12:17:57.173455  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.173466  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:57.173473  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:57.173536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:57.209969  186170 cri.go:89] found id: ""
	I1028 12:17:57.210004  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.210015  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:57.210026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:57.210118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:57.252141  186170 cri.go:89] found id: ""
	I1028 12:17:57.252172  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.252182  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:57.252192  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:57.252206  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:57.304533  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:57.304576  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:57.319775  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:57.319807  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:57.385156  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:57.385186  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:57.385198  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:57.464777  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:57.464818  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:57.557519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.057963  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:56.715168  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:58.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.215445  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:59.271418  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.766158  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.005073  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:00.033478  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:00.033580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:00.071437  186170 cri.go:89] found id: ""
	I1028 12:18:00.071462  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.071470  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:00.071475  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:00.071524  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:00.108147  186170 cri.go:89] found id: ""
	I1028 12:18:00.108183  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.108195  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:00.108204  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:00.108262  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:00.146129  186170 cri.go:89] found id: ""
	I1028 12:18:00.146157  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.146168  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:00.146176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:00.146237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:00.184211  186170 cri.go:89] found id: ""
	I1028 12:18:00.184239  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.184254  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:00.184262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:00.184325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:00.221949  186170 cri.go:89] found id: ""
	I1028 12:18:00.221980  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.221988  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:00.221995  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:00.222049  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:00.264173  186170 cri.go:89] found id: ""
	I1028 12:18:00.264203  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.264213  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:00.264230  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:00.264287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:00.302024  186170 cri.go:89] found id: ""
	I1028 12:18:00.302048  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.302057  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:00.302065  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:00.302134  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:00.340500  186170 cri.go:89] found id: ""
	I1028 12:18:00.340529  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.340542  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:00.340553  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:00.340574  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:00.392375  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:00.392419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:00.409823  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:00.409854  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:00.489965  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:00.489988  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:00.490000  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:00.574510  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:00.574553  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.116821  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:03.131120  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:03.131188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:03.168283  186170 cri.go:89] found id: ""
	I1028 12:18:03.168320  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.168331  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:03.168340  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:03.168404  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:03.210877  186170 cri.go:89] found id: ""
	I1028 12:18:03.210902  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.210910  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:03.210922  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:03.210981  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:03.248316  186170 cri.go:89] found id: ""
	I1028 12:18:03.248351  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.248362  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:03.248370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:03.248437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:03.287624  186170 cri.go:89] found id: ""
	I1028 12:18:03.287653  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.287663  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:03.287674  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:03.287738  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:02.556743  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.055348  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.217504  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.715462  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.768899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:06.266111  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.323235  186170 cri.go:89] found id: ""
	I1028 12:18:03.323268  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.323281  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:03.323289  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:03.323350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:03.359449  186170 cri.go:89] found id: ""
	I1028 12:18:03.359481  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.359489  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:03.359496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:03.359544  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:03.397656  186170 cri.go:89] found id: ""
	I1028 12:18:03.397682  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.397690  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:03.397696  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:03.397756  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:03.436269  186170 cri.go:89] found id: ""
	I1028 12:18:03.436312  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.436325  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:03.436337  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:03.436353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.484677  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:03.484721  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:03.538826  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:03.538867  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:03.554032  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:03.554067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:03.630222  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:03.630256  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:03.630274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.208709  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:06.223650  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:06.223731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:06.264302  186170 cri.go:89] found id: ""
	I1028 12:18:06.264339  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.264348  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:06.264356  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:06.264415  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:06.306168  186170 cri.go:89] found id: ""
	I1028 12:18:06.306204  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.306212  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:06.306218  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:06.306306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:06.344883  186170 cri.go:89] found id: ""
	I1028 12:18:06.344909  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.344920  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:06.344927  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:06.344978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:06.382601  186170 cri.go:89] found id: ""
	I1028 12:18:06.382630  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.382640  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:06.382648  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:06.382720  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:06.428844  186170 cri.go:89] found id: ""
	I1028 12:18:06.428871  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.428878  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:06.428884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:06.428936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:06.480468  186170 cri.go:89] found id: ""
	I1028 12:18:06.480497  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.480508  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:06.480516  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:06.480581  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:06.525838  186170 cri.go:89] found id: ""
	I1028 12:18:06.525869  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.525882  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:06.525890  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:06.525950  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:06.572122  186170 cri.go:89] found id: ""
	I1028 12:18:06.572147  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.572154  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:06.572164  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:06.572176  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:06.642898  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:06.642925  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:06.642941  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.727353  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:06.727399  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:06.770170  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:06.770208  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:06.825593  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:06.825635  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:07.055842  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.057870  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:07.716593  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.215089  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:08.266990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.765441  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.340955  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:09.355706  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:09.355783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:09.390008  186170 cri.go:89] found id: ""
	I1028 12:18:09.390039  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.390050  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:09.390057  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:09.390123  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:09.428209  186170 cri.go:89] found id: ""
	I1028 12:18:09.428247  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.428259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:09.428267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:09.428327  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:09.466499  186170 cri.go:89] found id: ""
	I1028 12:18:09.466524  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.466531  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:09.466538  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:09.466596  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:09.505384  186170 cri.go:89] found id: ""
	I1028 12:18:09.505418  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.505426  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:09.505433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:09.505492  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:09.543113  186170 cri.go:89] found id: ""
	I1028 12:18:09.543145  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.543154  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:09.543160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:09.543225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:09.581402  186170 cri.go:89] found id: ""
	I1028 12:18:09.581436  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.581446  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:09.581459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:09.581542  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:09.620586  186170 cri.go:89] found id: ""
	I1028 12:18:09.620616  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.620623  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:09.620629  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:09.620682  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:09.657220  186170 cri.go:89] found id: ""
	I1028 12:18:09.657246  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.657253  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:09.657261  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:09.657272  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:09.709636  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:09.709671  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:09.724476  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:09.724510  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:09.800194  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:09.800226  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:09.800242  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:09.882217  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:09.882254  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:12.425609  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:12.443417  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:12.443480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:12.509173  186170 cri.go:89] found id: ""
	I1028 12:18:12.509202  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.509211  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:12.509217  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:12.509287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:12.546564  186170 cri.go:89] found id: ""
	I1028 12:18:12.546595  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.546605  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:12.546612  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:12.546676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:12.584949  186170 cri.go:89] found id: ""
	I1028 12:18:12.584982  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.584990  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:12.584996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:12.585045  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:12.624513  186170 cri.go:89] found id: ""
	I1028 12:18:12.624543  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.624554  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:12.624562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:12.624624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:12.661811  186170 cri.go:89] found id: ""
	I1028 12:18:12.661854  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.661867  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:12.661876  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:12.661936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:12.700037  186170 cri.go:89] found id: ""
	I1028 12:18:12.700072  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.700080  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:12.700086  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:12.700149  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:12.740604  186170 cri.go:89] found id: ""
	I1028 12:18:12.740629  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.740637  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:12.740643  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:12.740696  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:12.779296  186170 cri.go:89] found id: ""
	I1028 12:18:12.779323  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.779333  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:12.779344  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:12.779358  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:12.830286  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:12.830330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:12.845423  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:12.845449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:12.923961  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:12.924003  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:12.924018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:13.003949  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:13.003990  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:11.556422  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.056678  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.216340  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.715086  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.766493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.766870  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.264729  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:15.552001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:15.565834  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:15.565899  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:15.598794  186170 cri.go:89] found id: ""
	I1028 12:18:15.598819  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.598828  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:15.598836  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:15.598904  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:15.637029  186170 cri.go:89] found id: ""
	I1028 12:18:15.637062  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.637073  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:15.637082  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:15.637148  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:15.675461  186170 cri.go:89] found id: ""
	I1028 12:18:15.675495  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.675503  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:15.675510  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:15.675577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:15.709169  186170 cri.go:89] found id: ""
	I1028 12:18:15.709198  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.709210  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:15.709217  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:15.709288  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:15.747687  186170 cri.go:89] found id: ""
	I1028 12:18:15.747715  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.747725  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:15.747740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:15.747802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:15.785554  186170 cri.go:89] found id: ""
	I1028 12:18:15.785587  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.785598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:15.785607  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:15.785674  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:15.828713  186170 cri.go:89] found id: ""
	I1028 12:18:15.828749  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.828762  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:15.828771  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:15.828834  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:15.864708  186170 cri.go:89] found id: ""
	I1028 12:18:15.864745  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.864757  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:15.864767  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:15.864788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:15.941064  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:15.941090  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:15.941102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:16.031546  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:16.031586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:16.074297  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:16.074343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:16.132758  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:16.132803  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:16.057216  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.555816  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:20.556292  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.215803  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.215927  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.265178  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.268144  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.649877  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:18.663420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:18.663480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:18.698967  186170 cri.go:89] found id: ""
	I1028 12:18:18.698999  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.699011  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:18.699020  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:18.699088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:18.738095  186170 cri.go:89] found id: ""
	I1028 12:18:18.738128  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.738140  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:18.738149  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:18.738231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:18.780039  186170 cri.go:89] found id: ""
	I1028 12:18:18.780066  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.780074  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:18.780080  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:18.780131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:18.820458  186170 cri.go:89] found id: ""
	I1028 12:18:18.820492  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.820501  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:18.820512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:18.820569  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:18.860856  186170 cri.go:89] found id: ""
	I1028 12:18:18.860887  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.860896  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:18.860903  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:18.860965  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:18.900435  186170 cri.go:89] found id: ""
	I1028 12:18:18.900467  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.900478  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:18.900486  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:18.900547  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:18.938468  186170 cri.go:89] found id: ""
	I1028 12:18:18.938499  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.938508  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:18.938515  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:18.938570  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:18.975389  186170 cri.go:89] found id: ""
	I1028 12:18:18.975429  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.975440  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:18.975451  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:18.975466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:19.028306  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:19.028354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:19.043348  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:19.043382  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:19.117653  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:19.117721  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:19.117737  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:19.204218  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:19.204256  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:21.749564  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:21.768060  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:21.768131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:21.805414  186170 cri.go:89] found id: ""
	I1028 12:18:21.805443  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.805454  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:21.805462  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:21.805541  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:21.842649  186170 cri.go:89] found id: ""
	I1028 12:18:21.842681  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.842691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:21.842699  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:21.842767  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:21.883241  186170 cri.go:89] found id: ""
	I1028 12:18:21.883269  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.883279  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:21.883288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:21.883351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:21.926358  186170 cri.go:89] found id: ""
	I1028 12:18:21.926386  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.926394  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:21.926401  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:21.926453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:21.964671  186170 cri.go:89] found id: ""
	I1028 12:18:21.964705  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.964717  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:21.964726  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:21.964794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:22.019111  186170 cri.go:89] found id: ""
	I1028 12:18:22.019144  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.019154  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:22.019163  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:22.019223  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:22.057484  186170 cri.go:89] found id: ""
	I1028 12:18:22.057511  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.057518  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:22.057547  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:22.057606  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:22.096908  186170 cri.go:89] found id: ""
	I1028 12:18:22.096931  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.096938  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:22.096947  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:22.096962  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:22.180348  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:22.180386  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:22.224772  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:22.224808  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:22.277686  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:22.277726  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:22.293300  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:22.293330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:22.369990  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:22.556987  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.057115  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.715576  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.715814  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.716043  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.767435  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:26.269805  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:24.870290  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:24.887030  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:24.887090  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:24.927592  186170 cri.go:89] found id: ""
	I1028 12:18:24.927620  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.927628  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:24.927635  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:24.927700  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:24.969025  186170 cri.go:89] found id: ""
	I1028 12:18:24.969059  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.969070  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:24.969077  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:24.969142  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:25.005439  186170 cri.go:89] found id: ""
	I1028 12:18:25.005476  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.005488  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:25.005496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:25.005573  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:25.046612  186170 cri.go:89] found id: ""
	I1028 12:18:25.046650  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.046659  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:25.046669  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:25.046733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:25.083162  186170 cri.go:89] found id: ""
	I1028 12:18:25.083186  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.083200  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:25.083209  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:25.083270  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:25.119277  186170 cri.go:89] found id: ""
	I1028 12:18:25.119322  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.119333  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:25.119341  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:25.119409  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:25.160875  186170 cri.go:89] found id: ""
	I1028 12:18:25.160906  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.160917  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:25.160925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:25.160987  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:25.194958  186170 cri.go:89] found id: ""
	I1028 12:18:25.194993  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.195003  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:25.195016  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:25.195032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:25.248571  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:25.248612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:25.264844  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:25.264876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:25.341487  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:25.341517  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:25.341552  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:25.419543  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:25.419586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:27.963358  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:27.977449  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:27.977509  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:28.013922  186170 cri.go:89] found id: ""
	I1028 12:18:28.013955  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.013963  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:28.013969  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:28.014050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:28.054628  186170 cri.go:89] found id: ""
	I1028 12:18:28.054658  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.054666  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:28.054671  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:28.054719  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:28.094289  186170 cri.go:89] found id: ""
	I1028 12:18:28.094315  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.094323  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:28.094330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:28.094390  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:28.131949  186170 cri.go:89] found id: ""
	I1028 12:18:28.131998  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.132011  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:28.132019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:28.132082  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:28.170428  186170 cri.go:89] found id: ""
	I1028 12:18:28.170461  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.170474  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:28.170483  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:28.170550  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:28.204953  186170 cri.go:89] found id: ""
	I1028 12:18:28.204980  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.204987  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:28.204994  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:28.205041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:28.247002  186170 cri.go:89] found id: ""
	I1028 12:18:28.247035  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.247044  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:28.247052  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:28.247122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:28.286700  186170 cri.go:89] found id: ""
	I1028 12:18:28.286730  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.286739  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:28.286747  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:28.286762  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:27.556197  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.057036  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.216535  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.715902  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.765730  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:31.267947  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.339162  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:28.339201  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:28.353667  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:28.353696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:28.426762  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:28.426784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:28.426800  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:28.511192  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:28.511232  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:31.054503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:31.069105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:31.069195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:31.112198  186170 cri.go:89] found id: ""
	I1028 12:18:31.112228  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.112237  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:31.112243  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:31.112306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:31.151487  186170 cri.go:89] found id: ""
	I1028 12:18:31.151522  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.151535  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:31.151544  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:31.151605  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:31.189604  186170 cri.go:89] found id: ""
	I1028 12:18:31.189636  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.189645  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:31.189651  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:31.189712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:31.231683  186170 cri.go:89] found id: ""
	I1028 12:18:31.231716  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.231726  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:31.231735  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:31.231793  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:31.268785  186170 cri.go:89] found id: ""
	I1028 12:18:31.268813  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.268824  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:31.268832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:31.268901  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:31.307450  186170 cri.go:89] found id: ""
	I1028 12:18:31.307475  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.307483  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:31.307489  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:31.307539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:31.342965  186170 cri.go:89] found id: ""
	I1028 12:18:31.342999  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.343011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:31.343019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:31.343084  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:31.380275  186170 cri.go:89] found id: ""
	I1028 12:18:31.380307  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.380317  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:31.380329  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:31.380343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:31.430198  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:31.430249  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:31.446355  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:31.446387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:31.530708  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:31.530738  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:31.530754  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:31.614033  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:31.614079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:32.556500  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.557446  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.214627  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:35.214782  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.772856  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:36.265722  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.156345  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:34.169766  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:34.169829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:34.208855  186170 cri.go:89] found id: ""
	I1028 12:18:34.208888  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.208903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:34.208910  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:34.208967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:34.258485  186170 cri.go:89] found id: ""
	I1028 12:18:34.258515  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.258524  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:34.258531  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:34.258593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:34.294139  186170 cri.go:89] found id: ""
	I1028 12:18:34.294168  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.294176  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:34.294182  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:34.294242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:34.329848  186170 cri.go:89] found id: ""
	I1028 12:18:34.329881  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.329892  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:34.329900  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:34.329967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:34.368223  186170 cri.go:89] found id: ""
	I1028 12:18:34.368249  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.368256  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:34.368262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:34.368310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:34.405101  186170 cri.go:89] found id: ""
	I1028 12:18:34.405133  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.405142  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:34.405149  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:34.405207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:34.441998  186170 cri.go:89] found id: ""
	I1028 12:18:34.442034  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.442045  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:34.442053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:34.442118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:34.478842  186170 cri.go:89] found id: ""
	I1028 12:18:34.478877  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.478888  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:34.478901  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:34.478917  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:34.532950  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:34.532991  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:34.548614  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:34.548643  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:34.623699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:34.623726  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:34.623743  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:34.702104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:34.702142  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.259720  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:37.276526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:37.276592  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:37.325783  186170 cri.go:89] found id: ""
	I1028 12:18:37.325823  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.325838  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:37.325847  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:37.325916  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:37.362754  186170 cri.go:89] found id: ""
	I1028 12:18:37.362784  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.362805  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:37.362813  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:37.362891  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:37.400428  186170 cri.go:89] found id: ""
	I1028 12:18:37.400465  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.400477  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:37.400485  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:37.400548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:37.438792  186170 cri.go:89] found id: ""
	I1028 12:18:37.438834  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.438846  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:37.438855  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:37.438918  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:37.477032  186170 cri.go:89] found id: ""
	I1028 12:18:37.477115  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.477126  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:37.477132  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:37.477199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:37.514834  186170 cri.go:89] found id: ""
	I1028 12:18:37.514866  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.514878  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:37.514888  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:37.514975  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:37.560797  186170 cri.go:89] found id: ""
	I1028 12:18:37.560821  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.560828  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:37.560835  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:37.560889  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:37.611126  186170 cri.go:89] found id: ""
	I1028 12:18:37.611156  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.611165  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:37.611177  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:37.611200  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.654809  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:37.654849  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:37.713519  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:37.713572  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:37.728043  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:37.728081  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:37.806662  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:37.806684  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:37.806702  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:36.559507  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.056993  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:37.215498  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.715541  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:38.266461  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.266611  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:42.268638  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.388380  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:40.402330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:40.402405  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:40.444948  186170 cri.go:89] found id: ""
	I1028 12:18:40.444978  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.444990  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:40.445002  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:40.445062  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:40.482342  186170 cri.go:89] found id: ""
	I1028 12:18:40.482378  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.482387  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:40.482393  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:40.482457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:40.532277  186170 cri.go:89] found id: ""
	I1028 12:18:40.532307  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.532318  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:40.532326  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:40.532388  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:40.579092  186170 cri.go:89] found id: ""
	I1028 12:18:40.579122  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.579130  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:40.579136  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:40.579204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:40.617091  186170 cri.go:89] found id: ""
	I1028 12:18:40.617116  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.617124  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:40.617130  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:40.617188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:40.655830  186170 cri.go:89] found id: ""
	I1028 12:18:40.655861  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.655871  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:40.655879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:40.655949  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:40.693436  186170 cri.go:89] found id: ""
	I1028 12:18:40.693472  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.693480  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:40.693490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:40.693572  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:40.731576  186170 cri.go:89] found id: ""
	I1028 12:18:40.731604  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.731615  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:40.731626  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:40.731642  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:40.782395  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:40.782441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:40.797572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:40.797607  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:40.873037  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:40.873078  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:40.873095  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:40.950913  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:40.950954  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:41.555847  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.558407  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:41.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.716370  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:46.214690  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:44.765752  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:47.266258  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.493377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:43.508379  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:43.508453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:43.546621  186170 cri.go:89] found id: ""
	I1028 12:18:43.546652  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.546660  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:43.546667  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:43.546714  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:43.587430  186170 cri.go:89] found id: ""
	I1028 12:18:43.587455  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.587462  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:43.587468  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:43.587520  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:43.623597  186170 cri.go:89] found id: ""
	I1028 12:18:43.623625  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.623633  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:43.623640  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:43.623702  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:43.661235  186170 cri.go:89] found id: ""
	I1028 12:18:43.661266  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.661274  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:43.661281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:43.661344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:43.697400  186170 cri.go:89] found id: ""
	I1028 12:18:43.697437  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.697448  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:43.697457  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:43.697521  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:43.732995  186170 cri.go:89] found id: ""
	I1028 12:18:43.733028  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.733038  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:43.733047  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:43.733115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:43.772570  186170 cri.go:89] found id: ""
	I1028 12:18:43.772595  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.772602  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:43.772608  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:43.772669  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:43.814234  186170 cri.go:89] found id: ""
	I1028 12:18:43.814265  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.814273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:43.814283  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:43.814295  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:43.868582  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:43.868630  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:43.885098  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:43.885136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:43.967902  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:43.967937  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:43.967955  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:44.048973  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:44.049021  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.592668  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:46.608596  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:46.608664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:46.652750  186170 cri.go:89] found id: ""
	I1028 12:18:46.652777  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.652785  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:46.652790  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:46.652848  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:46.696309  186170 cri.go:89] found id: ""
	I1028 12:18:46.696333  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.696340  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:46.696346  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:46.696396  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:46.741580  186170 cri.go:89] found id: ""
	I1028 12:18:46.741609  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.741620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:46.741628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:46.741693  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:46.782589  186170 cri.go:89] found id: ""
	I1028 12:18:46.782620  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.782628  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:46.782635  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:46.782695  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:46.821602  186170 cri.go:89] found id: ""
	I1028 12:18:46.821632  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.821644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:46.821653  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:46.821713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:46.857025  186170 cri.go:89] found id: ""
	I1028 12:18:46.857050  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.857060  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:46.857067  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:46.857115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:46.893687  186170 cri.go:89] found id: ""
	I1028 12:18:46.893725  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.893737  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:46.893746  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:46.893818  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:46.930334  186170 cri.go:89] found id: ""
	I1028 12:18:46.930367  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.930377  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:46.930385  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:46.930398  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:46.980610  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:46.980650  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:46.995861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:46.995901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:47.069355  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:47.069383  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:47.069396  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:47.157228  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:47.157284  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.056747  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.058377  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.557006  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.715456  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.716120  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.267222  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:51.765814  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.722229  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:49.735404  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:49.735507  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:49.776722  186170 cri.go:89] found id: ""
	I1028 12:18:49.776757  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.776768  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:49.776776  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:49.776844  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:49.812856  186170 cri.go:89] found id: ""
	I1028 12:18:49.812888  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.812898  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:49.812905  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:49.812989  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:49.849483  186170 cri.go:89] found id: ""
	I1028 12:18:49.849516  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.849544  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:49.849603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:49.849672  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:49.886525  186170 cri.go:89] found id: ""
	I1028 12:18:49.886555  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.886566  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:49.886574  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:49.886637  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:49.928249  186170 cri.go:89] found id: ""
	I1028 12:18:49.928281  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.928292  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:49.928299  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:49.928354  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:49.964587  186170 cri.go:89] found id: ""
	I1028 12:18:49.964619  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.964630  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:49.964641  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:49.964704  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:50.002275  186170 cri.go:89] found id: ""
	I1028 12:18:50.002305  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.002314  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:50.002321  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:50.002376  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:50.040949  186170 cri.go:89] found id: ""
	I1028 12:18:50.040979  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.040990  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:50.041003  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:50.041018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:50.086062  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:50.086098  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:50.138786  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:50.138837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:50.152992  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:50.153023  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:50.230432  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:50.230465  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:50.230481  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:52.813001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:52.825800  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:52.825879  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:52.863852  186170 cri.go:89] found id: ""
	I1028 12:18:52.863882  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.863893  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:52.863901  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:52.863967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:52.902963  186170 cri.go:89] found id: ""
	I1028 12:18:52.903003  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.903016  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:52.903024  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:52.903098  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:52.950862  186170 cri.go:89] found id: ""
	I1028 12:18:52.950893  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.950903  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:52.950912  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:52.950980  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:52.995840  186170 cri.go:89] found id: ""
	I1028 12:18:52.995872  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.995883  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:52.995891  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:52.995960  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:53.040153  186170 cri.go:89] found id: ""
	I1028 12:18:53.040179  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.040187  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:53.040194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:53.040256  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:53.077492  186170 cri.go:89] found id: ""
	I1028 12:18:53.077548  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.077561  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:53.077568  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:53.077618  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:53.114930  186170 cri.go:89] found id: ""
	I1028 12:18:53.114962  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.114973  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:53.114981  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:53.115064  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:53.152707  186170 cri.go:89] found id: ""
	I1028 12:18:53.152737  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.152747  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:53.152760  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:53.152777  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:53.195033  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:53.195068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:53.246464  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:53.246500  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:53.261430  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:53.261456  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:18:52.557045  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.057031  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:53.215817  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.714784  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:54.268377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:56.764471  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:18:53.343518  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:53.343541  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:53.343556  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:55.924584  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:55.938627  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:55.938712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:55.976319  186170 cri.go:89] found id: ""
	I1028 12:18:55.976354  186170 logs.go:282] 0 containers: []
	W1028 12:18:55.976364  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:55.976372  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:55.976440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:56.013947  186170 cri.go:89] found id: ""
	I1028 12:18:56.013979  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.014002  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:56.014010  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:56.014065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:56.055934  186170 cri.go:89] found id: ""
	I1028 12:18:56.055963  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.055970  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:56.055976  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:56.056030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:56.092766  186170 cri.go:89] found id: ""
	I1028 12:18:56.092798  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.092809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:56.092817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:56.092883  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:56.129708  186170 cri.go:89] found id: ""
	I1028 12:18:56.129741  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.129748  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:56.129755  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:56.129817  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:56.169640  186170 cri.go:89] found id: ""
	I1028 12:18:56.169684  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.169693  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:56.169700  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:56.169761  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:56.210585  186170 cri.go:89] found id: ""
	I1028 12:18:56.210617  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.210626  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:56.210633  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:56.210683  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:56.248144  186170 cri.go:89] found id: ""
	I1028 12:18:56.248177  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.248189  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:56.248201  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:56.248216  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:56.298962  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:56.299004  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:56.313314  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:56.313351  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:56.389450  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:56.389473  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:56.389508  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:56.470888  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:56.470927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:57.556098  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.057165  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:57.716269  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.214149  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:58.765585  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:01.265119  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:59.012377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:59.025740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:59.025853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:59.063706  186170 cri.go:89] found id: ""
	I1028 12:18:59.063770  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.063782  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:59.063794  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:59.063855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:59.100543  186170 cri.go:89] found id: ""
	I1028 12:18:59.100573  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.100582  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:59.100590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:59.100651  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:59.140044  186170 cri.go:89] found id: ""
	I1028 12:18:59.140073  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.140080  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:59.140087  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:59.140133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:59.174872  186170 cri.go:89] found id: ""
	I1028 12:18:59.174905  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.174914  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:59.174920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:59.174971  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:59.210456  186170 cri.go:89] found id: ""
	I1028 12:18:59.210484  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.210492  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:59.210498  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:59.210560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:59.248441  186170 cri.go:89] found id: ""
	I1028 12:18:59.248474  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.248485  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:59.248494  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:59.248558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:59.286897  186170 cri.go:89] found id: ""
	I1028 12:18:59.286928  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.286937  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:59.286944  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:59.286996  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:59.323187  186170 cri.go:89] found id: ""
	I1028 12:18:59.323221  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.323232  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:59.323244  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:59.323260  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:59.401126  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:59.401156  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:59.401171  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:59.486673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:59.486712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:59.532117  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:59.532153  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:59.588697  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:59.588738  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.104377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:02.118007  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:02.118092  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:02.157674  186170 cri.go:89] found id: ""
	I1028 12:19:02.157705  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.157715  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:02.157724  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:02.157783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:02.194407  186170 cri.go:89] found id: ""
	I1028 12:19:02.194437  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.194448  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:02.194456  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:02.194546  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:02.232940  186170 cri.go:89] found id: ""
	I1028 12:19:02.232975  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.232988  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:02.232996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:02.233070  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:02.271554  186170 cri.go:89] found id: ""
	I1028 12:19:02.271595  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.271606  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:02.271613  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:02.271681  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:02.309932  186170 cri.go:89] found id: ""
	I1028 12:19:02.309965  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.309975  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:02.309984  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:02.310044  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:02.345704  186170 cri.go:89] found id: ""
	I1028 12:19:02.345732  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.345740  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:02.345747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:02.345794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:02.381727  186170 cri.go:89] found id: ""
	I1028 12:19:02.381760  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.381770  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:02.381778  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:02.381841  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:02.417888  186170 cri.go:89] found id: ""
	I1028 12:19:02.417922  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.417933  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:02.417943  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:02.417961  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:02.497427  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:02.497458  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:02.497471  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:02.580562  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:02.580600  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:02.619048  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:02.619087  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:02.677089  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:02.677136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.556763  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.557107  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:02.216779  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.714940  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:03.267189  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.268332  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.192892  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:05.207240  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:05.207325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:05.244005  186170 cri.go:89] found id: ""
	I1028 12:19:05.244041  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.244070  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:05.244078  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:05.244130  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:05.285828  186170 cri.go:89] found id: ""
	I1028 12:19:05.285859  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.285869  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:05.285877  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:05.285936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:05.324666  186170 cri.go:89] found id: ""
	I1028 12:19:05.324694  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.324706  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:05.324713  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:05.324782  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:05.361365  186170 cri.go:89] found id: ""
	I1028 12:19:05.361401  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.361414  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:05.361423  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:05.361485  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:05.399962  186170 cri.go:89] found id: ""
	I1028 12:19:05.399996  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.400007  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:05.400017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:05.400116  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:05.438510  186170 cri.go:89] found id: ""
	I1028 12:19:05.438541  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.438553  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:05.438562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:05.438624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:05.477168  186170 cri.go:89] found id: ""
	I1028 12:19:05.477204  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.477214  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:05.477222  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:05.477286  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:05.513314  186170 cri.go:89] found id: ""
	I1028 12:19:05.513350  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.513362  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:05.513374  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:05.513388  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:05.568453  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:05.568490  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:05.583833  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:05.583870  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:05.659413  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:05.659438  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:05.659457  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:05.744673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:05.744714  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.291543  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:08.305747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:08.305829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:07.056718  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:09.056994  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:06.715788  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.716850  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:11.215701  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:07.765389  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:10.268458  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.350508  186170 cri.go:89] found id: ""
	I1028 12:19:08.350536  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.350544  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:08.350550  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:08.350602  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:08.387432  186170 cri.go:89] found id: ""
	I1028 12:19:08.387463  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.387470  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:08.387476  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:08.387527  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:08.426351  186170 cri.go:89] found id: ""
	I1028 12:19:08.426392  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.426404  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:08.426412  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:08.426478  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:08.467546  186170 cri.go:89] found id: ""
	I1028 12:19:08.467577  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.467586  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:08.467592  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:08.467642  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:08.504317  186170 cri.go:89] found id: ""
	I1028 12:19:08.504347  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.504356  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:08.504363  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:08.504418  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:08.539598  186170 cri.go:89] found id: ""
	I1028 12:19:08.539630  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.539642  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:08.539655  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:08.539713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:08.578128  186170 cri.go:89] found id: ""
	I1028 12:19:08.578162  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.578173  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:08.578181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:08.578247  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:08.614276  186170 cri.go:89] found id: ""
	I1028 12:19:08.614309  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.614326  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:08.614338  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:08.614354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:08.691937  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:08.691961  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:08.691977  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:08.773046  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:08.773092  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.816419  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:08.816449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:08.868763  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:08.868811  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.384115  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:11.398325  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:11.398416  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:11.433049  186170 cri.go:89] found id: ""
	I1028 12:19:11.433081  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.433089  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:11.433097  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:11.433151  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:11.469221  186170 cri.go:89] found id: ""
	I1028 12:19:11.469249  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.469259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:11.469267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:11.469332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:11.506673  186170 cri.go:89] found id: ""
	I1028 12:19:11.506703  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.506714  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:11.506722  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:11.506802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:11.542657  186170 cri.go:89] found id: ""
	I1028 12:19:11.542684  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.542694  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:11.542702  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:11.542760  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:11.582873  186170 cri.go:89] found id: ""
	I1028 12:19:11.582903  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.582913  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:11.582921  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:11.582990  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:11.619742  186170 cri.go:89] found id: ""
	I1028 12:19:11.619770  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.619784  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:11.619791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:11.619854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:11.654169  186170 cri.go:89] found id: ""
	I1028 12:19:11.654200  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.654211  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:11.654220  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:11.654280  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:11.690586  186170 cri.go:89] found id: ""
	I1028 12:19:11.690614  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.690624  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:11.690637  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:11.690656  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:11.744337  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:11.744378  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.758405  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:11.758446  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:11.843252  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:11.843278  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:11.843289  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:11.924104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:11.924140  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:11.559182  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.057546  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:13.216963  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:15.715550  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:12.764850  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.766597  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.265687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.464177  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:14.478351  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:14.478423  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:14.518159  186170 cri.go:89] found id: ""
	I1028 12:19:14.518189  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.518200  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:14.518209  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:14.518260  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:14.565688  186170 cri.go:89] found id: ""
	I1028 12:19:14.565722  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.565734  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:14.565742  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:14.565802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:14.601994  186170 cri.go:89] found id: ""
	I1028 12:19:14.602021  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.602029  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:14.602054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:14.602122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:14.640100  186170 cri.go:89] found id: ""
	I1028 12:19:14.640142  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.640156  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:14.640166  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:14.640237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:14.675395  186170 cri.go:89] found id: ""
	I1028 12:19:14.675422  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.675430  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:14.675436  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:14.675494  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:14.715365  186170 cri.go:89] found id: ""
	I1028 12:19:14.715393  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.715404  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:14.715413  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:14.715466  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:14.761335  186170 cri.go:89] found id: ""
	I1028 12:19:14.761363  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.761373  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:14.761381  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:14.761446  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:14.800412  186170 cri.go:89] found id: ""
	I1028 12:19:14.800449  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.800461  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:14.800472  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:14.800486  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:14.882189  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:14.882227  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:14.926725  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:14.926752  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:14.979280  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:14.979329  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:14.993985  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:14.994019  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:15.063407  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.564258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:17.578611  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:17.578679  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:17.615753  186170 cri.go:89] found id: ""
	I1028 12:19:17.615784  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.615797  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:17.615805  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:17.615864  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:17.650812  186170 cri.go:89] found id: ""
	I1028 12:19:17.650851  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.650862  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:17.650870  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:17.651014  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:17.693006  186170 cri.go:89] found id: ""
	I1028 12:19:17.693039  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.693048  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:17.693054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:17.693104  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:17.733120  186170 cri.go:89] found id: ""
	I1028 12:19:17.733146  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.733153  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:17.733160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:17.733212  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:17.773002  186170 cri.go:89] found id: ""
	I1028 12:19:17.773029  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.773036  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:17.773042  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:17.773097  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:17.812560  186170 cri.go:89] found id: ""
	I1028 12:19:17.812590  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.812597  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:17.812603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:17.812653  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:17.848307  186170 cri.go:89] found id: ""
	I1028 12:19:17.848341  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.848349  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:17.848355  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:17.848402  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:17.888184  186170 cri.go:89] found id: ""
	I1028 12:19:17.888210  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.888217  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:17.888226  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:17.888238  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:17.901662  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:17.901692  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:17.975611  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.975634  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:17.975647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:18.054762  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:18.054801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:18.101269  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:18.101302  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:16.057835  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:18.556414  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.716374  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.216629  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:19.266849  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:21.267040  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.655292  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:20.671085  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:20.671161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:20.715368  186170 cri.go:89] found id: ""
	I1028 12:19:20.715397  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.715407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:20.715415  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:20.715476  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:20.762337  186170 cri.go:89] found id: ""
	I1028 12:19:20.762366  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.762374  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:20.762379  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:20.762437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:20.804710  186170 cri.go:89] found id: ""
	I1028 12:19:20.804740  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.804747  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:20.804759  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:20.804813  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:20.841158  186170 cri.go:89] found id: ""
	I1028 12:19:20.841189  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.841199  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:20.841208  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:20.841277  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:20.883976  186170 cri.go:89] found id: ""
	I1028 12:19:20.884016  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.884027  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:20.884035  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:20.884105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:20.930155  186170 cri.go:89] found id: ""
	I1028 12:19:20.930186  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.930194  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:20.930201  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:20.930265  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:20.967805  186170 cri.go:89] found id: ""
	I1028 12:19:20.967832  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.967840  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:20.967847  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:20.967896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:21.020010  186170 cri.go:89] found id: ""
	I1028 12:19:21.020038  186170 logs.go:282] 0 containers: []
	W1028 12:19:21.020046  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:21.020055  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:21.020079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:21.081013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:21.081054  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:21.096709  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:21.096741  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:21.172935  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:21.172957  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:21.172970  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:21.248909  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:21.248949  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:21.056990  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.057233  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:25.555717  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:22.715323  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:24.715818  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.765935  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:26.264839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.793748  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:23.809036  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:23.809107  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:23.848021  186170 cri.go:89] found id: ""
	I1028 12:19:23.848051  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.848064  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:23.848070  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:23.848122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:23.885253  186170 cri.go:89] found id: ""
	I1028 12:19:23.885278  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.885294  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:23.885302  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:23.885360  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:23.923423  186170 cri.go:89] found id: ""
	I1028 12:19:23.923475  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.923484  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:23.923490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:23.923554  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:23.963761  186170 cri.go:89] found id: ""
	I1028 12:19:23.963793  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.963809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:23.963820  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:23.963890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:24.001402  186170 cri.go:89] found id: ""
	I1028 12:19:24.001431  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.001440  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:24.001447  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:24.001512  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:24.042367  186170 cri.go:89] found id: ""
	I1028 12:19:24.042400  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.042410  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:24.042419  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:24.042480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:24.081838  186170 cri.go:89] found id: ""
	I1028 12:19:24.081865  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.081873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:24.081879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:24.081932  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:24.117066  186170 cri.go:89] found id: ""
	I1028 12:19:24.117096  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.117104  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:24.117113  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:24.117125  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:24.156892  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:24.156928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:24.210595  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:24.210631  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:24.226214  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:24.226248  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:24.304750  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:24.304775  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:24.304792  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:26.887059  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:26.901656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:26.901735  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:26.944377  186170 cri.go:89] found id: ""
	I1028 12:19:26.944407  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.944416  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:26.944425  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:26.944487  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:26.980794  186170 cri.go:89] found id: ""
	I1028 12:19:26.980827  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.980835  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:26.980841  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:26.980907  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:27.023661  186170 cri.go:89] found id: ""
	I1028 12:19:27.023686  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.023694  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:27.023701  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:27.023753  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:27.062325  186170 cri.go:89] found id: ""
	I1028 12:19:27.062353  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.062361  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:27.062369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:27.062417  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:27.101200  186170 cri.go:89] found id: ""
	I1028 12:19:27.101230  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.101237  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:27.101243  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:27.101300  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:27.139566  186170 cri.go:89] found id: ""
	I1028 12:19:27.139591  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.139598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:27.139605  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:27.139664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:27.183931  186170 cri.go:89] found id: ""
	I1028 12:19:27.183959  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.183968  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:27.183996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:27.184065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:27.226978  186170 cri.go:89] found id: ""
	I1028 12:19:27.227012  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.227027  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:27.227038  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:27.227067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:27.279752  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:27.279790  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:27.293477  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:27.293504  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:27.365813  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:27.365836  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:27.365850  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:27.458409  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:27.458466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:27.556370  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.057786  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:27.216093  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:29.715861  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:28.265912  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.266993  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:32.267566  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.023363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:30.036965  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:30.037032  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:30.077599  186170 cri.go:89] found id: ""
	I1028 12:19:30.077627  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.077635  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:30.077642  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:30.077691  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:30.115071  186170 cri.go:89] found id: ""
	I1028 12:19:30.115103  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.115113  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:30.115121  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:30.115189  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:30.150636  186170 cri.go:89] found id: ""
	I1028 12:19:30.150665  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.150678  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:30.150684  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:30.150747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:30.188339  186170 cri.go:89] found id: ""
	I1028 12:19:30.188380  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.188390  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:30.188397  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:30.188452  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:30.224072  186170 cri.go:89] found id: ""
	I1028 12:19:30.224102  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.224113  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:30.224121  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:30.224185  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:30.258784  186170 cri.go:89] found id: ""
	I1028 12:19:30.258822  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.258834  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:30.258842  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:30.258903  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:30.302495  186170 cri.go:89] found id: ""
	I1028 12:19:30.302527  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.302535  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:30.302541  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:30.302590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:30.339170  186170 cri.go:89] found id: ""
	I1028 12:19:30.339201  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.339213  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:30.339223  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:30.339236  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:30.396664  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:30.396700  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:30.411609  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:30.411638  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:30.484168  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:30.484196  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:30.484212  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:30.567664  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:30.567704  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:33.111268  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:33.125143  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:33.125229  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:33.168662  186170 cri.go:89] found id: ""
	I1028 12:19:33.168701  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.168712  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:33.168722  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:33.168792  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:33.222421  186170 cri.go:89] found id: ""
	I1028 12:19:33.222451  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.222463  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:33.222471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:33.222536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:33.275637  186170 cri.go:89] found id: ""
	I1028 12:19:33.275669  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.275680  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:33.275689  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:33.275751  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:32.555888  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.556782  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:31.716178  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.213813  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.213999  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.764307  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.766217  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:33.325787  186170 cri.go:89] found id: ""
	I1028 12:19:33.325818  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.325830  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:33.325840  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:33.325900  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:33.361597  186170 cri.go:89] found id: ""
	I1028 12:19:33.361634  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.361644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:33.361652  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:33.361744  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:33.401838  186170 cri.go:89] found id: ""
	I1028 12:19:33.401866  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.401874  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:33.401880  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:33.401941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:33.439315  186170 cri.go:89] found id: ""
	I1028 12:19:33.439342  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.439351  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:33.439359  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:33.439422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:33.479140  186170 cri.go:89] found id: ""
	I1028 12:19:33.479177  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.479188  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:33.479206  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:33.479222  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:33.534059  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:33.534102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:33.549379  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:33.549416  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:33.626567  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:33.626603  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:33.626619  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:33.702398  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:33.702441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.250145  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:36.265123  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:36.265193  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:36.304048  186170 cri.go:89] found id: ""
	I1028 12:19:36.304078  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.304087  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:36.304093  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:36.304141  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:36.348611  186170 cri.go:89] found id: ""
	I1028 12:19:36.348649  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.348660  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:36.348672  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:36.348739  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:36.390510  186170 cri.go:89] found id: ""
	I1028 12:19:36.390543  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.390555  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:36.390563  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:36.390627  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:36.430465  186170 cri.go:89] found id: ""
	I1028 12:19:36.430489  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.430496  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:36.430503  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:36.430556  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:36.472189  186170 cri.go:89] found id: ""
	I1028 12:19:36.472216  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.472226  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:36.472234  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:36.472332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:36.510029  186170 cri.go:89] found id: ""
	I1028 12:19:36.510057  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.510065  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:36.510073  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:36.510133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:36.548556  186170 cri.go:89] found id: ""
	I1028 12:19:36.548581  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.548589  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:36.548595  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:36.548641  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:36.592965  186170 cri.go:89] found id: ""
	I1028 12:19:36.592993  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.593002  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:36.593013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:36.593032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:36.608843  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:36.608878  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:36.680629  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:36.680655  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:36.680672  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:36.768605  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:36.768636  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.815293  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:36.815334  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:37.056333  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.559461  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:38.214406  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:40.214795  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.264988  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:41.267329  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.369371  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:39.382819  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:39.382905  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:39.421953  186170 cri.go:89] found id: ""
	I1028 12:19:39.421990  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.422018  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:39.422028  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:39.422088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:39.457426  186170 cri.go:89] found id: ""
	I1028 12:19:39.457461  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.457478  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:39.457484  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:39.457558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:39.494983  186170 cri.go:89] found id: ""
	I1028 12:19:39.495008  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.495018  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:39.495026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:39.495105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:39.530187  186170 cri.go:89] found id: ""
	I1028 12:19:39.530221  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.530233  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:39.530242  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:39.530308  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:39.571088  186170 cri.go:89] found id: ""
	I1028 12:19:39.571123  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.571133  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:39.571142  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:39.571204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:39.605684  186170 cri.go:89] found id: ""
	I1028 12:19:39.605719  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.605731  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:39.605739  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:39.605804  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:39.639083  186170 cri.go:89] found id: ""
	I1028 12:19:39.639115  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.639125  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:39.639133  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:39.639195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:39.676273  186170 cri.go:89] found id: ""
	I1028 12:19:39.676310  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.676321  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:39.676332  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:39.676349  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:39.733153  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:39.733190  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:39.748475  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:39.748513  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:39.823884  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:39.823906  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:39.823920  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:39.903711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:39.903763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.447237  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:42.460741  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:42.460822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:42.500518  186170 cri.go:89] found id: ""
	I1028 12:19:42.500553  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.500565  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:42.500574  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:42.500636  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:42.542836  186170 cri.go:89] found id: ""
	I1028 12:19:42.542867  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.542875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:42.542882  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:42.542943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:42.581271  186170 cri.go:89] found id: ""
	I1028 12:19:42.581303  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.581322  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:42.581331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:42.581382  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:42.616772  186170 cri.go:89] found id: ""
	I1028 12:19:42.616796  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.616803  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:42.616809  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:42.616858  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:42.650467  186170 cri.go:89] found id: ""
	I1028 12:19:42.650504  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.650515  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:42.650524  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:42.650590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:42.688677  186170 cri.go:89] found id: ""
	I1028 12:19:42.688713  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.688726  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:42.688734  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:42.688796  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:42.727141  186170 cri.go:89] found id: ""
	I1028 12:19:42.727167  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.727174  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:42.727181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:42.727231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:42.767373  186170 cri.go:89] found id: ""
	I1028 12:19:42.767404  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.767415  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:42.767425  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:42.767438  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:42.818474  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:42.818511  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:42.832181  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:42.832210  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:42.905428  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:42.905450  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:42.905465  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:42.985614  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:42.985653  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.056568  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:44.057256  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:42.715261  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.215472  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:43.765595  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.766087  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.527361  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:45.541487  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:45.541574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:45.579562  186170 cri.go:89] found id: ""
	I1028 12:19:45.579591  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.579600  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:45.579606  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:45.579666  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:45.614461  186170 cri.go:89] found id: ""
	I1028 12:19:45.614494  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.614504  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:45.614512  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:45.614575  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:45.651495  186170 cri.go:89] found id: ""
	I1028 12:19:45.651538  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.651550  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:45.651558  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:45.651619  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:45.691664  186170 cri.go:89] found id: ""
	I1028 12:19:45.691699  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.691710  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:45.691718  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:45.691785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:45.730284  186170 cri.go:89] found id: ""
	I1028 12:19:45.730325  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.730341  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:45.730348  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:45.730410  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:45.766524  186170 cri.go:89] found id: ""
	I1028 12:19:45.766554  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.766565  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:45.766573  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:45.766630  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:45.803353  186170 cri.go:89] found id: ""
	I1028 12:19:45.803381  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.803393  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:45.803400  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:45.803468  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:45.842928  186170 cri.go:89] found id: ""
	I1028 12:19:45.842953  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.842960  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:45.842968  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:45.842979  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:45.921782  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:45.921809  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:45.921826  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:45.997269  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:45.997321  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:46.036008  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:46.036042  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:46.090242  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:46.090282  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:46.058519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.556533  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:47.215644  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:49.715563  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.266115  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:50.268535  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:52.271227  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.607052  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:48.620745  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:48.620816  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:48.657550  186170 cri.go:89] found id: ""
	I1028 12:19:48.657582  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.657592  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:48.657601  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:48.657676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:48.695514  186170 cri.go:89] found id: ""
	I1028 12:19:48.695542  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.695549  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:48.695555  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:48.695603  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:48.733589  186170 cri.go:89] found id: ""
	I1028 12:19:48.733616  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.733624  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:48.733631  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:48.733680  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:48.768340  186170 cri.go:89] found id: ""
	I1028 12:19:48.768370  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.768378  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:48.768384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:48.768435  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:48.818057  186170 cri.go:89] found id: ""
	I1028 12:19:48.818086  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.818096  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:48.818105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:48.818169  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:48.854663  186170 cri.go:89] found id: ""
	I1028 12:19:48.854695  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.854705  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:48.854715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:48.854785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:48.888919  186170 cri.go:89] found id: ""
	I1028 12:19:48.888949  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.888960  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:48.888969  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:48.889030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:48.923871  186170 cri.go:89] found id: ""
	I1028 12:19:48.923900  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.923908  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:48.923917  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:48.923928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:48.977985  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:48.978025  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:48.992861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:48.992893  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:49.071925  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:49.071952  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:49.071969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:49.149743  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:49.149784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.693881  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:51.708017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:51.708079  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:51.748837  186170 cri.go:89] found id: ""
	I1028 12:19:51.748872  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.748883  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:51.748892  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:51.748957  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:51.793684  186170 cri.go:89] found id: ""
	I1028 12:19:51.793716  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.793733  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:51.793741  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:51.793803  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:51.832104  186170 cri.go:89] found id: ""
	I1028 12:19:51.832140  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.832151  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:51.832159  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:51.832225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:51.866214  186170 cri.go:89] found id: ""
	I1028 12:19:51.866250  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.866264  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:51.866270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:51.866345  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:51.909073  186170 cri.go:89] found id: ""
	I1028 12:19:51.909100  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.909107  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:51.909113  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:51.909160  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:51.949202  186170 cri.go:89] found id: ""
	I1028 12:19:51.949231  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.949239  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:51.949245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:51.949306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:51.990977  186170 cri.go:89] found id: ""
	I1028 12:19:51.991004  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.991011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:51.991018  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:51.991069  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:52.027180  186170 cri.go:89] found id: ""
	I1028 12:19:52.027215  186170 logs.go:282] 0 containers: []
	W1028 12:19:52.027226  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:52.027237  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:52.027259  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:52.080482  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:52.080536  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:52.097572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:52.097612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:52.173055  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:52.173095  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:52.173113  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:52.249950  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:52.249995  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.056089  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:53.056973  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:55.057853  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:51.716787  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.214943  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.765208  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:57.267687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.794765  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:54.809435  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:54.809548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:54.846763  186170 cri.go:89] found id: ""
	I1028 12:19:54.846793  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.846805  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:54.846815  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:54.846876  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:54.885359  186170 cri.go:89] found id: ""
	I1028 12:19:54.885396  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.885409  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:54.885417  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:54.885481  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:54.922612  186170 cri.go:89] found id: ""
	I1028 12:19:54.922639  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.922650  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:54.922659  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:54.922722  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:54.958406  186170 cri.go:89] found id: ""
	I1028 12:19:54.958439  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.958450  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:54.958459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:54.958525  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:54.995319  186170 cri.go:89] found id: ""
	I1028 12:19:54.995350  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.995361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:54.995370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:54.995440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:55.032511  186170 cri.go:89] found id: ""
	I1028 12:19:55.032543  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.032551  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:55.032559  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:55.032624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:55.073196  186170 cri.go:89] found id: ""
	I1028 12:19:55.073226  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.073238  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:55.073245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:55.073310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:55.113726  186170 cri.go:89] found id: ""
	I1028 12:19:55.113754  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.113762  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:55.113771  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:55.113787  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:55.164402  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:55.164442  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:55.180729  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:55.180763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:55.254437  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:55.254466  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:55.254483  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:55.341392  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:55.341441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:57.883896  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:57.897429  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:57.897539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:57.933084  186170 cri.go:89] found id: ""
	I1028 12:19:57.933109  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.933118  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:57.933127  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:57.933198  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:57.971244  186170 cri.go:89] found id: ""
	I1028 12:19:57.971276  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.971289  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:57.971298  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:57.971361  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:58.007916  186170 cri.go:89] found id: ""
	I1028 12:19:58.007952  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.007963  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:58.007972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:58.008050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:58.043042  186170 cri.go:89] found id: ""
	I1028 12:19:58.043084  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.043094  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:58.043103  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:58.043172  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:58.080277  186170 cri.go:89] found id: ""
	I1028 12:19:58.080314  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.080324  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:58.080332  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:58.080395  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:58.117254  186170 cri.go:89] found id: ""
	I1028 12:19:58.117292  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.117301  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:58.117308  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:58.117356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:58.152830  186170 cri.go:89] found id: ""
	I1028 12:19:58.152862  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.152873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:58.152881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:58.152946  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:58.190229  186170 cri.go:89] found id: ""
	I1028 12:19:58.190259  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.190270  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:58.190281  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:58.190296  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:58.231792  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:58.231823  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:58.291189  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:58.291233  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:58.307804  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:58.307837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:19:57.556056  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.557091  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:00.050404  185942 pod_ready.go:82] duration metric: took 4m0.000726571s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:00.050457  185942 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:00.050479  185942 pod_ready.go:39] duration metric: took 4m12.759391454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:00.050506  185942 kubeadm.go:597] duration metric: took 4m20.427916933s to restartPrimaryControlPlane
	W1028 12:20:00.050569  185942 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:00.050616  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:19:56.715048  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.215821  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.769397  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:02.265702  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:19:58.384490  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:58.384515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:58.384530  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:00.963569  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:00.977292  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:20:00.977363  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:20:01.017161  186170 cri.go:89] found id: ""
	I1028 12:20:01.017190  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.017198  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:20:01.017204  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:20:01.017254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:20:01.054651  186170 cri.go:89] found id: ""
	I1028 12:20:01.054687  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.054698  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:20:01.054705  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:20:01.054768  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:20:01.092934  186170 cri.go:89] found id: ""
	I1028 12:20:01.092968  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.092979  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:20:01.092988  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:20:01.093048  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:20:01.134463  186170 cri.go:89] found id: ""
	I1028 12:20:01.134499  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.134510  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:20:01.134519  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:20:01.134580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:20:01.171922  186170 cri.go:89] found id: ""
	I1028 12:20:01.171960  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.171970  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:20:01.171978  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:20:01.172050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:20:01.208664  186170 cri.go:89] found id: ""
	I1028 12:20:01.208694  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.208703  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:20:01.208715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:20:01.208781  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:20:01.248207  186170 cri.go:89] found id: ""
	I1028 12:20:01.248242  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.248251  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:20:01.248258  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:20:01.248318  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:20:01.289182  186170 cri.go:89] found id: ""
	I1028 12:20:01.289212  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.289222  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:20:01.289233  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:20:01.289277  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:20:01.334646  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:20:01.334679  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:20:01.396212  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:20:01.396255  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:20:01.411774  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:20:01.411801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:20:01.497745  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:20:01.497772  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:20:01.497784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:01.715264  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.216628  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.765386  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:06.765802  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.092363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:04.106585  186170 kubeadm.go:597] duration metric: took 4m1.83229859s to restartPrimaryControlPlane
	W1028 12:20:04.106657  186170 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:04.106678  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:07.549703  186170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.442997936s)
	I1028 12:20:07.549781  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:07.565304  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:07.577919  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:07.590433  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:07.590461  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:07.590514  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:07.600793  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:07.600858  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:07.611331  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:07.621191  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:07.621256  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:07.631722  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.642180  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:07.642255  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.654425  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:07.664696  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:07.664755  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:07.675272  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:07.902931  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:06.715439  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.214561  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.216343  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.265899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.764867  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:13.716362  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.214893  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:14.264333  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.765340  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:18.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:20.715790  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:19.270934  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:21.764931  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:22.715880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:25.216499  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:23.766240  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.271567  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.353961  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.303321788s)
	I1028 12:20:26.354038  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:26.373066  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:26.386209  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:26.398568  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:26.398591  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:26.398634  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:26.410916  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:26.410976  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:26.423771  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:26.435883  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:26.435961  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:26.448506  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.460449  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:26.460506  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.472817  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:26.483653  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:26.483743  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:26.494435  185942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:26.682378  185942 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:27.715587  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:29.717407  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:28.766206  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:30.766289  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.820344  185942 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:20:35.820446  185942 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:20:35.820555  185942 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:20:35.820688  185942 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:20:35.820812  185942 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:20:35.820902  185942 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:20:35.823423  185942 out.go:235]   - Generating certificates and keys ...
	I1028 12:20:35.823594  185942 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:20:35.823700  185942 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:20:35.823804  185942 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:20:35.823893  185942 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:20:35.824001  185942 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:20:35.824082  185942 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:20:35.824167  185942 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:20:35.824255  185942 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:20:35.824360  185942 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:20:35.824445  185942 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:20:35.824504  185942 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:20:35.824566  185942 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:20:35.824622  185942 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:20:35.824725  185942 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:20:35.824805  185942 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:20:35.824944  185942 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:20:35.825058  185942 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:20:35.825209  185942 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:20:35.825300  185942 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:20:35.826890  185942 out.go:235]   - Booting up control plane ...
	I1028 12:20:35.827007  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:20:35.827077  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:20:35.827142  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:20:35.827285  185942 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:20:35.827420  185942 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:20:35.827487  185942 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:20:35.827705  185942 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:20:35.827848  185942 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:20:35.827943  185942 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.264999ms
	I1028 12:20:35.828059  185942 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:20:35.828130  185942 kubeadm.go:310] [api-check] The API server is healthy after 5.502732581s
	I1028 12:20:35.828299  185942 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:20:35.828472  185942 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:20:35.828523  185942 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:20:35.828712  185942 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-709250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:20:35.828764  185942 kubeadm.go:310] [bootstrap-token] Using token: srdxzz.lxk56bs7sgkeocij
	I1028 12:20:35.830228  185942 out.go:235]   - Configuring RBAC rules ...
	I1028 12:20:35.830335  185942 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:20:35.830422  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:20:35.830563  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:20:35.830729  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:20:35.830842  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:20:35.830928  185942 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:20:35.831065  185942 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:20:35.831122  185942 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:20:35.831174  185942 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:20:35.831181  185942 kubeadm.go:310] 
	I1028 12:20:35.831229  185942 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:20:35.831237  185942 kubeadm.go:310] 
	I1028 12:20:35.831302  185942 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:20:35.831313  185942 kubeadm.go:310] 
	I1028 12:20:35.831356  185942 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:20:35.831439  185942 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:20:35.831517  185942 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:20:35.831531  185942 kubeadm.go:310] 
	I1028 12:20:35.831616  185942 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:20:35.831628  185942 kubeadm.go:310] 
	I1028 12:20:35.831678  185942 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:20:35.831682  185942 kubeadm.go:310] 
	I1028 12:20:35.831730  185942 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:20:35.831809  185942 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:20:35.831921  185942 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:20:35.831933  185942 kubeadm.go:310] 
	I1028 12:20:35.832041  185942 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:20:35.832141  185942 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:20:35.832150  185942 kubeadm.go:310] 
	I1028 12:20:35.832249  185942 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832373  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:20:35.832404  185942 kubeadm.go:310] 	--control-plane 
	I1028 12:20:35.832414  185942 kubeadm.go:310] 
	I1028 12:20:35.832516  185942 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:20:35.832524  185942 kubeadm.go:310] 
	I1028 12:20:35.832642  185942 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832812  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
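	The join commands above are only valid while the logged bootstrap token (srdxzz.lxk56bs7sgkeocij) exists; an equivalent command can be reprinted on the control-plane node with kubeadm itself. A minimal sketch (the token and hash it prints will differ from the values above):

	    # list bootstrap tokens currently known to the cluster
	    sudo kubeadm token list
	    # print a fresh worker join command (creates a new token)
	    sudo kubeadm token create --print-join-command
	    # for an additional control-plane node, re-upload certs to obtain a --certificate-key
	    sudo kubeadm init phase upload-certs --upload-certs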
	I1028 12:20:35.832833  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:20:35.832843  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:20:35.834428  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:20:35.835603  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:20:35.847857  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
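	The 496-byte conflist written here is minikube's bridge CNI configuration; if the pod network misbehaves it can be inspected directly on the node. A sketch of that check (file name taken from the log line above):

	    # confirm the bridge CNI config is the one CRI-O will pick up
	    ls /etc/cni/net.d/
	    sudo cat /etc/cni/net.d/1-k8s.conflist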
	I1028 12:20:35.867921  185942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:20:35.868088  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:35.868107  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709250 minikube.k8s.io/updated_at=2024_10_28T12_20_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=embed-certs-709250 minikube.k8s.io/primary=true
	I1028 12:20:35.908233  185942 ops.go:34] apiserver oom_adj: -16
	I1028 12:20:32.215299  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:34.716880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:32.766922  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.267132  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:36.121114  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:36.621188  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.122032  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.621405  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.122105  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.621960  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.122142  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.622093  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.121643  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.287609  185942 kubeadm.go:1113] duration metric: took 4.419612649s to wait for elevateKubeSystemPrivileges
	I1028 12:20:40.287656  185942 kubeadm.go:394] duration metric: took 5m0.720591132s to StartCluster
	I1028 12:20:40.287703  185942 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.287814  185942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:20:40.290472  185942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.290787  185942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:20:40.291051  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:20:40.290926  185942 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:20:40.291125  185942 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709250"
	I1028 12:20:40.291126  185942 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709250"
	I1028 12:20:40.291142  185942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709250"
	I1028 12:20:40.291148  185942 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709250"
	W1028 12:20:40.291158  185942 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:20:40.291182  185942 addons.go:69] Setting metrics-server=true in profile "embed-certs-709250"
	I1028 12:20:40.291220  185942 addons.go:234] Setting addon metrics-server=true in "embed-certs-709250"
	W1028 12:20:40.291233  185942 addons.go:243] addon metrics-server should already be in state true
	I1028 12:20:40.291282  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291195  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291593  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291631  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291727  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291771  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291786  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291813  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.292877  185942 out.go:177] * Verifying Kubernetes components...
	I1028 12:20:40.294858  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:20:40.310225  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I1028 12:20:40.310814  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.311524  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.311552  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.311961  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.312174  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.312867  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1028 12:20:40.312901  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I1028 12:20:40.313354  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313389  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313964  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.313987  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.313967  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.314040  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.314365  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314428  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314883  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.314907  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.315710  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.315744  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.316210  185942 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709250"
	W1028 12:20:40.316229  185942 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:20:40.316261  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.316619  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.316648  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.331940  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1028 12:20:40.332732  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.333487  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.333537  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.333932  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.334145  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.336054  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I1028 12:20:40.336291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.336441  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337079  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.337117  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.337211  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I1028 12:20:40.337597  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337998  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338171  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.338189  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.338291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.338925  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338972  185942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:20:40.339570  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.339621  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.340197  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.341080  185942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.341099  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:20:40.341115  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.341872  185942 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:20:40.343244  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:20:40.343278  185942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:20:40.343308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.344718  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345186  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.345216  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345457  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.345666  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.345842  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.346053  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.346977  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347514  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.347546  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347739  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.347936  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.348069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.348236  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.357912  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I1028 12:20:40.358358  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.358838  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.358858  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.359224  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.359441  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.361308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.361630  185942 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.361654  185942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:20:40.361675  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.365789  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366319  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.366347  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366659  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.366879  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.367069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.367245  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.526205  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:20:40.545404  185942 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555003  185942 node_ready.go:49] node "embed-certs-709250" has status "Ready":"True"
	I1028 12:20:40.555028  185942 node_ready.go:38] duration metric: took 9.592797ms for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555047  185942 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:40.564021  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:40.660020  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:20:40.660061  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:20:40.666435  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.691423  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.692384  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:20:40.692411  185942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:20:40.739518  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:20:40.739549  185942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:20:40.765228  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:20:37.216347  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:39.716471  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.192384  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192422  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192491  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192514  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192740  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192759  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192783  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192791  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192915  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192942  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192951  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192962  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.193093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193125  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193131  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.193373  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193403  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193409  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.229776  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.229808  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.230111  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.230127  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.624688  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.624714  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625048  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.625055  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625066  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625074  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.625081  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625283  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625312  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625325  185942 addons.go:475] Verifying addon metrics-server=true in "embed-certs-709250"
	I1028 12:20:41.625329  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.627194  185942 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:20:37.771166  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:40.265616  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.265990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.628572  185942 addons.go:510] duration metric: took 1.337655555s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
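	With storage-provisioner, default-storageclass and metrics-server enabled, the metrics-server "Ready":"False" waits that dominate this report can be investigated by querying the addon objects directly; an illustrative sketch against this cluster (context name as reported by minikube):

	    kubectl --context embed-certs-709250 -n kube-system get deploy metrics-server
	    kubectl --context embed-certs-709250 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-709250 get storageclass
	    # only succeeds once metrics-server is actually serving metrics
	    kubectl --context embed-certs-709250 top nodes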
	I1028 12:20:42.572801  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.571062  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.571095  185942 pod_ready.go:82] duration metric: took 3.007040788s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.571110  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576592  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.576620  185942 pod_ready.go:82] duration metric: took 5.500425ms for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576633  185942 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:45.583586  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.216524  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:44.715547  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.758721  186547 pod_ready.go:82] duration metric: took 4m0.000295852s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:43.758758  186547 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:43.758783  186547 pod_ready.go:39] duration metric: took 4m13.710127509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:43.758811  186547 kubeadm.go:597] duration metric: took 4m21.647032906s to restartPrimaryControlPlane
	W1028 12:20:43.758873  186547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:43.758910  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:47.089478  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.089502  185942 pod_ready.go:82] duration metric: took 3.512861746s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.089512  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094229  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.094255  185942 pod_ready.go:82] duration metric: took 4.736326ms for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094267  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098823  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.098859  185942 pod_ready.go:82] duration metric: took 4.584003ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098872  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104063  185942 pod_ready.go:93] pod "kube-proxy-gck6r" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.104083  185942 pod_ready.go:82] duration metric: took 5.204526ms for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104091  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168177  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.168210  185942 pod_ready.go:82] duration metric: took 64.110225ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168221  185942 pod_ready.go:39] duration metric: took 6.613160968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:47.168243  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:20:47.168309  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:47.186907  185942 api_server.go:72] duration metric: took 6.896070864s to wait for apiserver process to appear ...
	I1028 12:20:47.186944  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:20:47.186998  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:20:47.191428  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:20:47.192677  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:20:47.192708  185942 api_server.go:131] duration metric: took 5.753471ms to wait for apiserver health ...
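	The healthz probe above can be repeated by hand from inside the VM, using the CA that kubeadm placed under the certificate dir /var/lib/minikube/certs (endpoint taken from the log); a sketch:

	    # from inside the embed-certs-709250 VM, e.g. via `minikube ssh -p embed-certs-709250`
	    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.39.211:8443/healthz
	    # expected response body: ok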
	I1028 12:20:47.192719  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:20:47.372534  185942 system_pods.go:59] 9 kube-system pods found
	I1028 12:20:47.372571  185942 system_pods.go:61] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.372580  185942 system_pods.go:61] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.372585  185942 system_pods.go:61] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.372590  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.372595  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.372599  185942 system_pods.go:61] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.372605  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.372614  185942 system_pods.go:61] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.372620  185942 system_pods.go:61] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.372633  185942 system_pods.go:74] duration metric: took 179.905205ms to wait for pod list to return data ...
	I1028 12:20:47.372647  185942 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:20:47.569853  185942 default_sa.go:45] found service account: "default"
	I1028 12:20:47.569886  185942 default_sa.go:55] duration metric: took 197.228265ms for default service account to be created ...
	I1028 12:20:47.569900  185942 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:20:47.770906  185942 system_pods.go:86] 9 kube-system pods found
	I1028 12:20:47.770941  185942 system_pods.go:89] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.770948  185942 system_pods.go:89] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.770953  185942 system_pods.go:89] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.770956  185942 system_pods.go:89] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.770960  185942 system_pods.go:89] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.770964  185942 system_pods.go:89] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.770967  185942 system_pods.go:89] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.770973  185942 system_pods.go:89] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.770977  185942 system_pods.go:89] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.770984  185942 system_pods.go:126] duration metric: took 201.078078ms to wait for k8s-apps to be running ...
	I1028 12:20:47.770990  185942 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:20:47.771033  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:47.787139  185942 system_svc.go:56] duration metric: took 16.13776ms WaitForService to wait for kubelet
	I1028 12:20:47.787171  185942 kubeadm.go:582] duration metric: took 7.496343244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:20:47.787191  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:20:47.969485  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:20:47.969516  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:20:47.969547  185942 node_conditions.go:105] duration metric: took 182.350787ms to run NodePressure ...
	I1028 12:20:47.969562  185942 start.go:241] waiting for startup goroutines ...
	I1028 12:20:47.969572  185942 start.go:246] waiting for cluster config update ...
	I1028 12:20:47.969586  185942 start.go:255] writing updated cluster config ...
	I1028 12:20:47.969916  185942 ssh_runner.go:195] Run: rm -f paused
	I1028 12:20:48.021806  185942 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:20:48.023816  185942 out.go:177] * Done! kubectl is now configured to use "embed-certs-709250" cluster and "default" namespace by default
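	At this point the embed-certs-709250 profile is fully started; the resulting kubeconfig entry can be sanity-checked with a few standard kubectl calls (illustrative only):

	    kubectl config current-context                      # prints the active profile's context
	    kubectl --context embed-certs-709250 get nodes -o wide
	    kubectl --context embed-certs-709250 -n kube-system get pods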
	I1028 12:20:46.716844  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:49.216673  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:51.715101  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:53.715509  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:56.217201  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:58.715405  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:00.715890  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:03.214669  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:05.215054  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.108895  186547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.349960271s)
	I1028 12:21:10.108979  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:10.126064  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:21:10.139862  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:21:10.150752  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:21:10.150780  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:21:10.150837  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:21:10.161522  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:21:10.161604  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:21:10.172230  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:21:10.183231  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:21:10.183299  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:21:10.194261  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.204462  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:21:10.204524  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.214991  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:21:10.225246  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:21:10.225315  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
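	The four grep/rm pairs above amount to a stale-config sweep: any /etc/kubernetes/*.conf that does not reference the expected endpoint (https://control-plane.minikube.internal:8444) is removed before kubeadm init is re-run. A roughly equivalent shell sketch:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8444' /etc/kubernetes/$f.conf \
	        || sudo rm -f /etc/kubernetes/$f.conf
	    done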
	I1028 12:21:10.235439  186547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:21:10.280951  186547 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:21:10.281020  186547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:21:10.391997  186547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:21:10.392163  186547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:21:10.392297  186547 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:21:10.402113  186547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:21:07.217549  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:09.716985  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.404087  186547 out.go:235]   - Generating certificates and keys ...
	I1028 12:21:10.404194  186547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:21:10.404312  186547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:21:10.404441  186547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:21:10.404537  186547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:21:10.404642  186547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:21:10.404719  186547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:21:10.404824  186547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:21:10.404914  186547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:21:10.405021  186547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:21:10.405124  186547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:21:10.405185  186547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:21:10.405269  186547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:21:10.608657  186547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:21:10.910608  186547 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:21:11.076768  186547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:21:11.244109  186547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:21:11.685910  186547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:21:11.686470  186547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:21:11.692266  186547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:21:11.694100  186547 out.go:235]   - Booting up control plane ...
	I1028 12:21:11.694231  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:21:11.694377  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:21:11.694607  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:21:11.713908  186547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:21:11.720788  186547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:21:11.720874  186547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:21:11.856867  186547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:21:11.856998  186547 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:21:12.358968  186547 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.942759ms
	I1028 12:21:12.359067  186547 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:21:12.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:14.208408  185546 pod_ready.go:82] duration metric: took 4m0.000135609s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:21:14.208447  185546 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:21:14.208457  185546 pod_ready.go:39] duration metric: took 4m3.200735753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:14.208485  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:14.208519  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:14.208571  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:14.266154  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.266184  185546 cri.go:89] found id: ""
	I1028 12:21:14.266196  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:14.266255  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.271416  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:14.271497  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:14.310426  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.310457  185546 cri.go:89] found id: ""
	I1028 12:21:14.310467  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:14.310529  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.314961  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:14.315037  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:14.362502  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.362530  185546 cri.go:89] found id: ""
	I1028 12:21:14.362540  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:14.362602  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.368118  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:14.368198  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:14.416827  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.416867  185546 cri.go:89] found id: ""
	I1028 12:21:14.416877  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:14.416943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.421640  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:14.421716  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:14.473506  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:14.473552  185546 cri.go:89] found id: ""
	I1028 12:21:14.473563  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:14.473627  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.480106  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:14.480183  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:14.529939  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:14.529964  185546 cri.go:89] found id: ""
	I1028 12:21:14.529971  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:14.530120  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.536199  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:14.536264  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:14.578374  185546 cri.go:89] found id: ""
	I1028 12:21:14.578407  185546 logs.go:282] 0 containers: []
	W1028 12:21:14.578419  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:14.578428  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:14.578490  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:14.620216  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:14.620243  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:14.620249  185546 cri.go:89] found id: ""
	I1028 12:21:14.620258  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:14.620323  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.625798  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.630653  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:14.630683  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:14.645364  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:14.645404  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.686202  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:14.686234  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.730094  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:14.730125  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:14.786272  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:14.786322  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:14.875705  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:14.875746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.931913  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:14.931960  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.991914  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:14.991953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:15.037022  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:15.037056  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:15.107597  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:15.107649  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:15.161401  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:15.161442  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:15.201916  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:15.201953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:15.682647  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:15.682694  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
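	The log-gathering steps above map onto commands that can be run directly on the node when diagnosing the metrics-server wait timeout; a sketch using the same tools the test harness invokes (the container id is a placeholder taken from `crictl ps -a` output):

	    sudo crictl ps -a                              # list all CRI-O containers
	    sudo crictl logs --tail 400 <container-id>     # per-container logs
	    sudo journalctl -u kubelet -n 400              # kubelet unit logs
	    sudo journalctl -u crio -n 400                 # CRI-O unit logs
	    sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig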
	I1028 12:21:17.861193  186547 kubeadm.go:310] [api-check] The API server is healthy after 5.502448006s
	I1028 12:21:17.874856  186547 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:21:17.889216  186547 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:21:17.933411  186547 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:21:17.933726  186547 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-349222 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:21:17.964667  186547 kubeadm.go:310] [bootstrap-token] Using token: o3vo7c.1x7759cggrb8kl7r
	I1028 12:21:17.966405  186547 out.go:235]   - Configuring RBAC rules ...
	I1028 12:21:17.966590  186547 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:21:17.982231  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:21:17.991850  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:21:17.996073  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:21:18.003531  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:21:18.008369  186547 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:21:18.272751  186547 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:21:18.724493  186547 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:21:19.269583  186547 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:21:19.270654  186547 kubeadm.go:310] 
	I1028 12:21:19.270715  186547 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:21:19.270722  186547 kubeadm.go:310] 
	I1028 12:21:19.270782  186547 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:21:19.270787  186547 kubeadm.go:310] 
	I1028 12:21:19.270816  186547 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:21:19.270875  186547 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:21:19.270938  186547 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:21:19.270949  186547 kubeadm.go:310] 
	I1028 12:21:19.271022  186547 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:21:19.271063  186547 kubeadm.go:310] 
	I1028 12:21:19.271165  186547 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:21:19.271190  186547 kubeadm.go:310] 
	I1028 12:21:19.271266  186547 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:21:19.271380  186547 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:21:19.271470  186547 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:21:19.271479  186547 kubeadm.go:310] 
	I1028 12:21:19.271600  186547 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:21:19.271697  186547 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:21:19.271709  186547 kubeadm.go:310] 
	I1028 12:21:19.271838  186547 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272010  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:21:19.272068  186547 kubeadm.go:310] 	--control-plane 
	I1028 12:21:19.272079  186547 kubeadm.go:310] 
	I1028 12:21:19.272250  186547 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:21:19.272270  186547 kubeadm.go:310] 
	I1028 12:21:19.272391  186547 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272568  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 12:21:19.273899  186547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:21:19.273955  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:21:19.273977  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:21:19.275868  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:21:18.355132  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:18.373260  185546 api_server.go:72] duration metric: took 4m14.615888944s to wait for apiserver process to appear ...
	I1028 12:21:18.373292  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:18.373353  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:18.373410  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:18.413207  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.413239  185546 cri.go:89] found id: ""
	I1028 12:21:18.413250  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:18.413336  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.419588  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:18.419655  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:18.476341  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.476373  185546 cri.go:89] found id: ""
	I1028 12:21:18.476383  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:18.476450  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.482835  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:18.482926  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:18.524934  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.524964  185546 cri.go:89] found id: ""
	I1028 12:21:18.524975  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:18.525040  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.530198  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:18.530284  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:18.577310  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:18.577338  185546 cri.go:89] found id: ""
	I1028 12:21:18.577349  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:18.577413  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.583048  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:18.583133  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:18.622556  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:18.622587  185546 cri.go:89] found id: ""
	I1028 12:21:18.622598  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:18.622701  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.628450  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:18.628540  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:18.674827  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:18.674861  185546 cri.go:89] found id: ""
	I1028 12:21:18.674873  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:18.674943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.680282  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:18.680354  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:18.738014  185546 cri.go:89] found id: ""
	I1028 12:21:18.738044  185546 logs.go:282] 0 containers: []
	W1028 12:21:18.738061  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:18.738070  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:18.738142  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:18.780615  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:18.780645  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:18.780651  185546 cri.go:89] found id: ""
	I1028 12:21:18.780660  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:18.780725  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.786003  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.790208  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:18.790231  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:18.806481  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:18.806523  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.853343  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:18.853382  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.906386  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:18.906424  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.948149  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:18.948182  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:19.000642  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:19.000678  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:19.038715  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:19.038744  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:19.079234  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:19.079271  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:19.147309  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:19.147349  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:19.271582  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:19.271620  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:19.319149  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:19.319195  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:19.385399  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:19.385437  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:19.811993  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:19.812035  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:19.277402  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:21:19.296307  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:21:19.323315  186547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-349222 minikube.k8s.io/updated_at=2024_10_28T12_21_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=default-k8s-diff-port-349222 minikube.k8s.io/primary=true
	I1028 12:21:19.550855  186547 ops.go:34] apiserver oom_adj: -16
	I1028 12:21:19.550882  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.051004  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.551001  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.051215  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.551283  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.050989  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.551423  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.051101  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.151453  186547 kubeadm.go:1113] duration metric: took 3.828156807s to wait for elevateKubeSystemPrivileges
	I1028 12:21:23.151505  186547 kubeadm.go:394] duration metric: took 5m1.103220882s to StartCluster
	I1028 12:21:23.151530  186547 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.151623  186547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:21:23.153557  186547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.153874  186547 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:21:23.153996  186547 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:21:23.154101  186547 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154122  186547 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154133  186547 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:21:23.154128  186547 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154165  186547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-349222"
	I1028 12:21:23.154160  186547 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154197  186547 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154213  186547 addons.go:243] addon metrics-server should already be in state true
	I1028 12:21:23.154167  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154254  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154664  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154679  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154749  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154135  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:21:23.154803  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154844  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154948  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.155649  186547 out.go:177] * Verifying Kubernetes components...
	I1028 12:21:23.157234  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:21:23.172278  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I1028 12:21:23.172870  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.173402  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.173429  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.173851  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.174056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.176299  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1028 12:21:23.176307  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I1028 12:21:23.176897  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177023  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177553  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177576  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177589  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177606  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177887  186547 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.177912  186547 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:21:23.177945  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.177971  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178030  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178369  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178404  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178541  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178572  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178961  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.179002  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.196089  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I1028 12:21:23.197979  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.198578  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.198607  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.199082  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.199301  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.199604  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I1028 12:21:23.200120  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.200519  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.200539  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.200938  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.201204  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.201711  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.201794  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1028 12:21:23.202225  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.202937  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.202956  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.203305  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.203753  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.203791  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.204026  186547 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:21:23.204210  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.205470  186547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:21:23.205490  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:21:23.205554  186547 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:21:23.205576  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.207334  186547 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.207352  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:21:23.207372  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.209573  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.210230  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210366  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.210608  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.210806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.211061  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.211884  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.211910  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.211928  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.212104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.212351  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.212570  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.212762  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.231664  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1028 12:21:23.232283  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.232904  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.232929  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.233414  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.233658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.236162  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.236665  186547 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.236680  186547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:21:23.236700  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.240368  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.240697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240848  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.241034  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.241156  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.241281  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.409461  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:21:23.430686  186547 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442439  186547 node_ready.go:49] node "default-k8s-diff-port-349222" has status "Ready":"True"
	I1028 12:21:23.442466  186547 node_ready.go:38] duration metric: took 11.749381ms for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442480  186547 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:23.447741  186547 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:23.515393  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.545556  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.575253  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:21:23.575280  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:21:23.663892  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:21:23.663920  186547 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:21:23.745621  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:23.745656  186547 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:21:23.823360  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:24.391754  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.391789  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.392092  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.392112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.392123  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.392130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393697  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393716  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.393725  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.393733  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393810  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393828  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393886  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394088  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.394112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.413957  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.414000  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.414363  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.414385  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853053  186547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029641945s)
	I1028 12:21:24.853107  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853123  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853434  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.853492  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853501  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853518  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853543  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853784  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853801  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853813  186547 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-349222"
	I1028 12:21:24.855707  186547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:21:22.373623  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:21:22.379559  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:21:22.380750  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:22.380772  185546 api_server.go:131] duration metric: took 4.007460794s to wait for apiserver health ...
	I1028 12:21:22.380783  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:22.380811  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:22.380875  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:22.426678  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:22.426710  185546 cri.go:89] found id: ""
	I1028 12:21:22.426720  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:22.426781  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.431942  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:22.432014  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:22.472504  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:22.472531  185546 cri.go:89] found id: ""
	I1028 12:21:22.472540  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:22.472595  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.478446  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:22.478511  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:22.520149  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.520169  185546 cri.go:89] found id: ""
	I1028 12:21:22.520177  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:22.520235  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.525716  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:22.525804  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:22.564801  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:22.564832  185546 cri.go:89] found id: ""
	I1028 12:21:22.564844  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:22.564909  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.570065  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:22.570147  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:22.613601  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.613628  185546 cri.go:89] found id: ""
	I1028 12:21:22.613637  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:22.613700  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.618413  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:22.618483  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:22.664329  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.664358  185546 cri.go:89] found id: ""
	I1028 12:21:22.664369  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:22.664430  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.669013  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:22.669084  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:22.706046  185546 cri.go:89] found id: ""
	I1028 12:21:22.706074  185546 logs.go:282] 0 containers: []
	W1028 12:21:22.706084  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:22.706091  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:22.706159  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:22.747718  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.747744  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.747750  185546 cri.go:89] found id: ""
	I1028 12:21:22.747759  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:22.747825  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.752857  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.758383  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:22.758410  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.800846  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:22.800882  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.858663  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:22.858702  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.896915  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:22.896959  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.938476  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:22.938503  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.984601  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:22.984628  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:23.000223  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:23.000259  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:23.130709  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:23.130746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:23.189821  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:23.189859  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:23.244463  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:23.244535  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:23.299279  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:23.299318  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:23.714691  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:23.714730  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:23.777703  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:23.777749  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:26.364133  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:21:26.364166  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.364171  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.364175  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.364179  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.364182  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.364185  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.364191  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.364195  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.364201  185546 system_pods.go:74] duration metric: took 3.98341316s to wait for pod list to return data ...
	I1028 12:21:26.364209  185546 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:26.366899  185546 default_sa.go:45] found service account: "default"
	I1028 12:21:26.366925  185546 default_sa.go:55] duration metric: took 2.710943ms for default service account to be created ...
	I1028 12:21:26.366934  185546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:26.371193  185546 system_pods.go:86] 8 kube-system pods found
	I1028 12:21:26.371219  185546 system_pods.go:89] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.371224  185546 system_pods.go:89] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.371228  185546 system_pods.go:89] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.371233  185546 system_pods.go:89] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.371237  185546 system_pods.go:89] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.371240  185546 system_pods.go:89] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.371246  185546 system_pods.go:89] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.371250  185546 system_pods.go:89] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.371257  185546 system_pods.go:126] duration metric: took 4.318058ms to wait for k8s-apps to be running ...
	I1028 12:21:26.371265  185546 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:26.371317  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:26.389093  185546 system_svc.go:56] duration metric: took 17.81758ms WaitForService to wait for kubelet
	I1028 12:21:26.389131  185546 kubeadm.go:582] duration metric: took 4m22.631766189s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:26.389158  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:26.392700  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:26.392728  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:26.392741  185546 node_conditions.go:105] duration metric: took 3.576663ms to run NodePressure ...
	I1028 12:21:26.392757  185546 start.go:241] waiting for startup goroutines ...
	I1028 12:21:26.392766  185546 start.go:246] waiting for cluster config update ...
	I1028 12:21:26.392781  185546 start.go:255] writing updated cluster config ...
	I1028 12:21:26.393086  185546 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:26.444274  185546 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:26.446322  185546 out.go:177] * Done! kubectl is now configured to use "no-preload-871884" cluster and "default" namespace by default
	I1028 12:21:24.856866  186547 addons.go:510] duration metric: took 1.702877543s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:21:25.462800  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:27.954511  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:30.454530  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.455161  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.955218  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.955242  186547 pod_ready.go:82] duration metric: took 9.507473956s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.955253  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.960990  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.961018  186547 pod_ready.go:82] duration metric: took 5.757431ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.961032  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966957  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.966981  186547 pod_ready.go:82] duration metric: took 5.940549ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966991  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972168  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.972194  186547 pod_ready.go:82] duration metric: took 5.195057ms for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972205  186547 pod_ready.go:39] duration metric: took 9.529713389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:32.972224  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:32.972294  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:32.988675  186547 api_server.go:72] duration metric: took 9.83476496s to wait for apiserver process to appear ...
	I1028 12:21:32.988711  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:32.988736  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:21:32.993068  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:21:32.994352  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:32.994377  186547 api_server.go:131] duration metric: took 5.656136ms to wait for apiserver health ...
	I1028 12:21:32.994387  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:32.999982  186547 system_pods.go:59] 9 kube-system pods found
	I1028 12:21:33.000010  186547 system_pods.go:61] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.000017  186547 system_pods.go:61] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.000024  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.000029  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.000033  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.000037  186547 system_pods.go:61] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.000040  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.000046  186547 system_pods.go:61] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.000051  186547 system_pods.go:61] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.000064  186547 system_pods.go:74] duration metric: took 5.66991ms to wait for pod list to return data ...
	I1028 12:21:33.000075  186547 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:33.003124  186547 default_sa.go:45] found service account: "default"
	I1028 12:21:33.003149  186547 default_sa.go:55] duration metric: took 3.067652ms for default service account to be created ...
	I1028 12:21:33.003159  186547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:33.155864  186547 system_pods.go:86] 9 kube-system pods found
	I1028 12:21:33.155902  186547 system_pods.go:89] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.155914  186547 system_pods.go:89] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.155921  186547 system_pods.go:89] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.155931  186547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.155938  186547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.155943  186547 system_pods.go:89] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.155948  186547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.155956  186547 system_pods.go:89] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.155965  186547 system_pods.go:89] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.155977  186547 system_pods.go:126] duration metric: took 152.809784ms to wait for k8s-apps to be running ...
	I1028 12:21:33.155991  186547 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:33.156049  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:33.171592  186547 system_svc.go:56] duration metric: took 15.589436ms WaitForService to wait for kubelet
	I1028 12:21:33.171647  186547 kubeadm.go:582] duration metric: took 10.017726239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:33.171672  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:33.352932  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:33.352984  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:33.352995  186547 node_conditions.go:105] duration metric: took 181.317488ms to run NodePressure ...
	I1028 12:21:33.353006  186547 start.go:241] waiting for startup goroutines ...
	I1028 12:21:33.353014  186547 start.go:246] waiting for cluster config update ...
	I1028 12:21:33.353024  186547 start.go:255] writing updated cluster config ...
	I1028 12:21:33.353314  186547 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:33.405276  186547 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:33.407589  186547 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-349222" cluster and "default" namespace by default
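	The startup sequence above waits, in order, on the apiserver /healthz endpoint, the kube-system pods, the default service account, and the kubelet service before declaring the cluster ready. A minimal sketch of the same checks run by hand, assuming the profile/context name "default-k8s-diff-port-349222" reported in the log above:

		kubectl --context default-k8s-diff-port-349222 get --raw='/healthz'              # expect "ok"
		kubectl --context default-k8s-diff-port-349222 -n kube-system get pods           # coredns, etcd, kube-*, storage-provisioner
		kubectl --context default-k8s-diff-port-349222 -n default get serviceaccount default
		minikube -p default-k8s-diff-port-349222 ssh -- sudo systemctl is-active kubelet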
	I1028 12:22:04.038479  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:22:04.038595  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:22:04.040170  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.040244  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.040356  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.040466  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.040579  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:04.040700  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:04.042557  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:04.042662  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:04.042757  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:04.042877  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:04.042984  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:04.043096  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:04.043158  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:04.043247  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:04.043341  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:04.043442  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:04.043558  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:04.043622  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:04.043675  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:04.043718  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:04.043768  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:04.043825  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:04.043871  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:04.044021  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:04.044164  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:04.044224  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:04.044332  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:04.046085  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:04.046237  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:04.046370  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:04.046463  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:04.046544  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:04.046679  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:04.046728  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:04.046786  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.046976  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047099  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047318  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047393  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047554  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047611  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047799  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047892  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.048151  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.048167  186170 kubeadm.go:310] 
	I1028 12:22:04.048208  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:22:04.048252  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:22:04.048262  186170 kubeadm.go:310] 
	I1028 12:22:04.048317  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:22:04.048363  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:22:04.048453  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:22:04.048464  186170 kubeadm.go:310] 
	I1028 12:22:04.048557  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:22:04.048604  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:22:04.048658  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:22:04.048672  186170 kubeadm.go:310] 
	I1028 12:22:04.048789  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:22:04.048872  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:22:04.048879  186170 kubeadm.go:310] 
	I1028 12:22:04.049027  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:22:04.049143  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:22:04.049246  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:22:04.049347  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:22:04.049428  186170 kubeadm.go:310] 
	W1028 12:22:04.049541  186170 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
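	The kubeadm output above repeats the same troubleshooting pointers several times; condensed into one checklist to run on the node (over minikube ssh), using only the commands and socket path quoted in the log:

		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		curl -sSL http://localhost:10248/healthz                                          # the probe kubeadm's kubelet-check uses
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID           # CONTAINERID is the placeholder from the log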
	
	I1028 12:22:04.049593  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:22:04.555608  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:22:04.571673  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:22:04.583645  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:22:04.583667  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:22:04.583708  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:22:04.594436  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:22:04.594497  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:22:04.605784  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:22:04.616699  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:22:04.616781  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:22:04.628581  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.639511  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:22:04.639608  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.650503  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:22:04.662383  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:22:04.662445  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
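	Before retrying kubeadm init, each of the four kubeconfig files is kept only if it already references the expected control-plane endpoint; otherwise it is removed. A rough single-file equivalent of that check, with the endpoint and path taken from the log above:

		sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf \
		  || sudo rm -f /etc/kubernetes/admin.conf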
	I1028 12:22:04.673286  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:22:04.755504  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.755597  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.903636  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.903808  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.903902  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:05.095520  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:05.097710  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:05.097850  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:05.097937  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:05.098061  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:05.098152  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:05.098252  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:05.098346  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:05.098440  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:05.098905  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:05.099253  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:05.099726  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:05.099786  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:05.099872  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:05.357781  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:05.538771  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:05.744145  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:06.074866  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:06.090636  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:06.091772  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:06.091863  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:06.255534  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:06.257598  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:06.257740  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:06.264309  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:06.266553  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:06.266699  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:06.268340  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:46.271413  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:46.271550  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:46.271812  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:51.271863  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:51.272118  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:01.272732  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:01.272940  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:21.273621  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:21.273888  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.272718  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:24:01.273041  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.273073  186170 kubeadm.go:310] 
	I1028 12:24:01.273126  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:24:01.273220  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:24:01.273249  186170 kubeadm.go:310] 
	I1028 12:24:01.273303  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:24:01.273375  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:24:01.273508  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:24:01.273520  186170 kubeadm.go:310] 
	I1028 12:24:01.273665  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:24:01.273717  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:24:01.273760  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:24:01.273770  186170 kubeadm.go:310] 
	I1028 12:24:01.273900  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:24:01.273966  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:24:01.273972  186170 kubeadm.go:310] 
	I1028 12:24:01.274090  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:24:01.274165  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:24:01.274233  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:24:01.274294  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:24:01.274302  186170 kubeadm.go:310] 
	I1028 12:24:01.275128  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:24:01.275221  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:24:01.275324  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:24:01.275400  186170 kubeadm.go:394] duration metric: took 7m59.062813621s to StartCluster
	I1028 12:24:01.275480  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:24:01.275551  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:24:01.326735  186170 cri.go:89] found id: ""
	I1028 12:24:01.326760  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.326767  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:24:01.326774  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:24:01.326822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:24:01.368065  186170 cri.go:89] found id: ""
	I1028 12:24:01.368094  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.368103  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:24:01.368109  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:24:01.368162  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:24:01.410391  186170 cri.go:89] found id: ""
	I1028 12:24:01.410425  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.410437  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:24:01.410446  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:24:01.410515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:24:01.453290  186170 cri.go:89] found id: ""
	I1028 12:24:01.453332  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.453343  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:24:01.453361  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:24:01.453422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:24:01.490513  186170 cri.go:89] found id: ""
	I1028 12:24:01.490540  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.490547  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:24:01.490553  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:24:01.490600  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:24:01.528320  186170 cri.go:89] found id: ""
	I1028 12:24:01.528350  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.528361  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:24:01.528369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:24:01.528430  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:24:01.566998  186170 cri.go:89] found id: ""
	I1028 12:24:01.567030  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.567041  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:24:01.567050  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:24:01.567113  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:24:01.600946  186170 cri.go:89] found id: ""
	I1028 12:24:01.600973  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.600983  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:24:01.600997  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:24:01.601018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:24:01.615132  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:24:01.615161  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:24:01.737336  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:24:01.737371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:24:01.737387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:24:01.862216  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:24:01.862257  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:24:01.906635  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:24:01.906666  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
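	When the control plane never comes up, minikube falls back to collecting dmesg, the node description, the CRI-O and kubelet journals, and the container list. The same diagnostics can be gathered by hand on the node with roughly the commands shown in the log:

		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo journalctl -u crio -n 400
		sudo journalctl -u kubelet -n 400
		sudo crictl ps -a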
	W1028 12:24:01.959555  186170 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:24:01.959629  186170 out.go:270] * 
	W1028 12:24:01.959691  186170 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.959706  186170 out.go:270] * 
	W1028 12:24:01.960513  186170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:24:01.963818  186170 out.go:201] 
	W1028 12:24:01.965768  186170 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.965852  186170 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:24:01.965874  186170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:24:01.967350  186170 out.go:201] 
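	The suggested retry for the K8S_KUBELET_NOT_RUNNING exit is to restart the profile with the kubelet cgroup driver pinned to systemd. A hedged sketch of that invocation; the profile name is a placeholder and the container-runtime flag is assumed from this crio test suite:

		minikube start -p <profile> --kubernetes-version=v1.20.0 --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd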
	
	
	==> CRI-O <==
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.509093954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118635509059257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bc63e85-803d-4f05-8ee7-3490654c5966 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.509848311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6846ae3e-9ce1-4258-bca7-0a518c6dabc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.509913626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6846ae3e-9ce1-4258-bca7-0a518c6dabc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.510163117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58,PodSandboxId:bb0049c7ac79f5d2502147a6d550a358c0f8048136026a2a3b3014bd0bc903d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085054804530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rxfxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b917b614-94ef-4c38-a1f4-60422af4bb73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f,PodSandboxId:9dfe25fb53ae1b10df34084a9219acc23912337f8b0b3ead62a6e88eb922ca8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085116079070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkcb7,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 0531b433-940f-4d3d-aae4-9fe5a1b96815,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c,PodSandboxId:84b37f3c41fb7f9fec904ed880d45c56bd5e87aa6cd2924d5f9a0a0994b93a6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1730118085089335766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b672315-a64e-4222-b07a-3a76050a3b67,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90,PodSandboxId:848af5b289652b60967283f36cc1ede29e347dda0af6d89bc84d91ae7cb4f014,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730118084906167167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6krbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab77549-1b29-4a66-b284-d63774357f88,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05,PodSandboxId:959c35f94c2d476ff6502e969d0d43ae9a7c12aef7d9a0a37c15aa00c12219c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118072857229176,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cd23c8951cc85d7333a08820d77e65,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e,PodSandboxId:9472931a10611f84b527697e528fa6a9610c298a9506b5a6d73bd9b67f5a6216,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118072836335606,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ed87fb6b1af6953f1209b69f39ac00,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd,PodSandboxId:df35e1501e17f8b045bf2e7151c19852afbd31801f8209648635029ca99f9958,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118072816558433,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38,PodSandboxId:99b42080bcf0cd2d9a440698337e234b4b41a7bb1620642ada71a9a2602e33a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118072722452677,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160601a4b03eef26d86ee8a233bf746d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487,PodSandboxId:3760c60af964c998070deeb262c8ed9c28d88223e7b274e777709b87ce462898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117784307083144,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6846ae3e-9ce1-4258-bca7-0a518c6dabc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.553601135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa52eaa9-e3bb-4f70-acc4-ef89ac2d3c83 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.553683792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa52eaa9-e3bb-4f70-acc4-ef89ac2d3c83 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.554999821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e543fb54-0b01-42c5-bcae-c5cb56643e13 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.555677882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118635555642059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e543fb54-0b01-42c5-bcae-c5cb56643e13 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.556630493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8349321a-21be-4269-ae75-95a55fdcf29b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.556712882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8349321a-21be-4269-ae75-95a55fdcf29b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.556970605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58,PodSandboxId:bb0049c7ac79f5d2502147a6d550a358c0f8048136026a2a3b3014bd0bc903d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085054804530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rxfxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b917b614-94ef-4c38-a1f4-60422af4bb73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f,PodSandboxId:9dfe25fb53ae1b10df34084a9219acc23912337f8b0b3ead62a6e88eb922ca8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085116079070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkcb7,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 0531b433-940f-4d3d-aae4-9fe5a1b96815,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c,PodSandboxId:84b37f3c41fb7f9fec904ed880d45c56bd5e87aa6cd2924d5f9a0a0994b93a6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1730118085089335766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b672315-a64e-4222-b07a-3a76050a3b67,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90,PodSandboxId:848af5b289652b60967283f36cc1ede29e347dda0af6d89bc84d91ae7cb4f014,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730118084906167167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6krbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab77549-1b29-4a66-b284-d63774357f88,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05,PodSandboxId:959c35f94c2d476ff6502e969d0d43ae9a7c12aef7d9a0a37c15aa00c12219c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118072857229176,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cd23c8951cc85d7333a08820d77e65,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e,PodSandboxId:9472931a10611f84b527697e528fa6a9610c298a9506b5a6d73bd9b67f5a6216,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118072836335606,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ed87fb6b1af6953f1209b69f39ac00,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd,PodSandboxId:df35e1501e17f8b045bf2e7151c19852afbd31801f8209648635029ca99f9958,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118072816558433,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38,PodSandboxId:99b42080bcf0cd2d9a440698337e234b4b41a7bb1620642ada71a9a2602e33a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118072722452677,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160601a4b03eef26d86ee8a233bf746d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487,PodSandboxId:3760c60af964c998070deeb262c8ed9c28d88223e7b274e777709b87ce462898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117784307083144,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8349321a-21be-4269-ae75-95a55fdcf29b name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.604928675Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=385ccbdb-4256-42fb-99a3-87ddde24ff3b name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.605006640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=385ccbdb-4256-42fb-99a3-87ddde24ff3b name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.606405824Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ef70c4b-5d81-46ea-85f0-f9cdcfa31bd5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.607038101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118635607014608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ef70c4b-5d81-46ea-85f0-f9cdcfa31bd5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.607745393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f262d6db-fce8-44af-ad25-791afe569bb9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.607796241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f262d6db-fce8-44af-ad25-791afe569bb9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.608086023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58,PodSandboxId:bb0049c7ac79f5d2502147a6d550a358c0f8048136026a2a3b3014bd0bc903d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085054804530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rxfxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b917b614-94ef-4c38-a1f4-60422af4bb73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f,PodSandboxId:9dfe25fb53ae1b10df34084a9219acc23912337f8b0b3ead62a6e88eb922ca8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085116079070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkcb7,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 0531b433-940f-4d3d-aae4-9fe5a1b96815,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c,PodSandboxId:84b37f3c41fb7f9fec904ed880d45c56bd5e87aa6cd2924d5f9a0a0994b93a6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1730118085089335766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b672315-a64e-4222-b07a-3a76050a3b67,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90,PodSandboxId:848af5b289652b60967283f36cc1ede29e347dda0af6d89bc84d91ae7cb4f014,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730118084906167167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6krbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab77549-1b29-4a66-b284-d63774357f88,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05,PodSandboxId:959c35f94c2d476ff6502e969d0d43ae9a7c12aef7d9a0a37c15aa00c12219c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118072857229176,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cd23c8951cc85d7333a08820d77e65,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e,PodSandboxId:9472931a10611f84b527697e528fa6a9610c298a9506b5a6d73bd9b67f5a6216,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118072836335606,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ed87fb6b1af6953f1209b69f39ac00,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd,PodSandboxId:df35e1501e17f8b045bf2e7151c19852afbd31801f8209648635029ca99f9958,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118072816558433,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38,PodSandboxId:99b42080bcf0cd2d9a440698337e234b4b41a7bb1620642ada71a9a2602e33a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118072722452677,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160601a4b03eef26d86ee8a233bf746d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487,PodSandboxId:3760c60af964c998070deeb262c8ed9c28d88223e7b274e777709b87ce462898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117784307083144,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f262d6db-fce8-44af-ad25-791afe569bb9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.646889908Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3653be4-b6f7-4405-bab6-9e1fd7c1dbbf name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.646962641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3653be4-b6f7-4405-bab6-9e1fd7c1dbbf name=/runtime.v1.RuntimeService/Version
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.649005905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77141cd7-442c-47cb-a774-22f416d1d197 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.649611337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118635649573851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77141cd7-442c-47cb-a774-22f416d1d197 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.650524686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e11d900-930f-4528-a026-d5c6eb667bed name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.650605682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e11d900-930f-4528-a026-d5c6eb667bed name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:30:35 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:30:35.650915487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58,PodSandboxId:bb0049c7ac79f5d2502147a6d550a358c0f8048136026a2a3b3014bd0bc903d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085054804530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rxfxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b917b614-94ef-4c38-a1f4-60422af4bb73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f,PodSandboxId:9dfe25fb53ae1b10df34084a9219acc23912337f8b0b3ead62a6e88eb922ca8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085116079070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkcb7,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 0531b433-940f-4d3d-aae4-9fe5a1b96815,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c,PodSandboxId:84b37f3c41fb7f9fec904ed880d45c56bd5e87aa6cd2924d5f9a0a0994b93a6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1730118085089335766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b672315-a64e-4222-b07a-3a76050a3b67,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90,PodSandboxId:848af5b289652b60967283f36cc1ede29e347dda0af6d89bc84d91ae7cb4f014,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730118084906167167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6krbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab77549-1b29-4a66-b284-d63774357f88,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05,PodSandboxId:959c35f94c2d476ff6502e969d0d43ae9a7c12aef7d9a0a37c15aa00c12219c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118072857229176,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cd23c8951cc85d7333a08820d77e65,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e,PodSandboxId:9472931a10611f84b527697e528fa6a9610c298a9506b5a6d73bd9b67f5a6216,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118072836335606,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ed87fb6b1af6953f1209b69f39ac00,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd,PodSandboxId:df35e1501e17f8b045bf2e7151c19852afbd31801f8209648635029ca99f9958,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118072816558433,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38,PodSandboxId:99b42080bcf0cd2d9a440698337e234b4b41a7bb1620642ada71a9a2602e33a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118072722452677,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160601a4b03eef26d86ee8a233bf746d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487,PodSandboxId:3760c60af964c998070deeb262c8ed9c28d88223e7b274e777709b87ce462898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117784307083144,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e11d900-930f-4528-a026-d5c6eb667bed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3a7aceb893fee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   9dfe25fb53ae1       coredns-7c65d6cfc9-nkcb7
	8f42d75c0ae9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   84b37f3c41fb7       storage-provisioner
	f47658c9ee366       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   bb0049c7ac79f       coredns-7c65d6cfc9-rxfxk
	c06cad6ecf391       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   848af5b289652       kube-proxy-6krbc
	5c2ab4a694be8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   959c35f94c2d4       etcd-default-k8s-diff-port-349222
	6c7c91e017ca1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   9472931a10611       kube-controller-manager-default-k8s-diff-port-349222
	a0e1fe9e1548a       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   df35e1501e17f       kube-apiserver-default-k8s-diff-port-349222
	871982dcccfa5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   99b42080bcf0c       kube-scheduler-default-k8s-diff-port-349222
	558c1f7b76098       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   3760c60af964c       kube-apiserver-default-k8s-diff-port-349222
	
	
	==> coredns [3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-349222
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-349222
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=default-k8s-diff-port-349222
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_21_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:21:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-349222
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:30:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:26:36 +0000   Mon, 28 Oct 2024 12:21:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:26:36 +0000   Mon, 28 Oct 2024 12:21:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:26:36 +0000   Mon, 28 Oct 2024 12:21:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:26:36 +0000   Mon, 28 Oct 2024 12:21:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.75
	  Hostname:    default-k8s-diff-port-349222
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 97b39fb3738145a4a89a71ccc8a6b7ec
	  System UUID:                97b39fb3-7381-45a4-a89a-71ccc8a6b7ec
	  Boot ID:                    3e81d451-65bb-48aa-924b-f60b7c7ff158
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-nkcb7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-rxfxk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-349222                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-349222             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-349222    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-6krbc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-349222             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-4xgsk                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-349222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-349222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-349222 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node default-k8s-diff-port-349222 event: Registered Node default-k8s-diff-port-349222 in Controller
	
	
	==> dmesg <==
	[  +0.054637] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046276] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct28 12:16] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.933083] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.644120] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.607521] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.068120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058730] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.219937] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.134028] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.345433] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.621163] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +0.076382] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.994797] systemd-fstab-generator[915]: Ignoring "noauto" option for root device
	[  +5.674875] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.847208] kauditd_printk_skb: 85 callbacks suppressed
	[Oct28 12:21] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.916137] systemd-fstab-generator[2599]: Ignoring "noauto" option for root device
	[  +4.455498] kauditd_printk_skb: 58 callbacks suppressed
	[  +2.098111] systemd-fstab-generator[2926]: Ignoring "noauto" option for root device
	[  +4.943388] systemd-fstab-generator[3037]: Ignoring "noauto" option for root device
	[  +0.124254] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.298044] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05] <==
	{"level":"info","ts":"2024-10-28T12:21:13.308311Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-28T12:21:13.308546Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"36333190e10008e7","initial-advertise-peer-urls":["https://192.168.50.75:2380"],"listen-peer-urls":["https://192.168.50.75:2380"],"advertise-client-urls":["https://192.168.50.75:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.75:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-28T12:21:13.308566Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-28T12:21:13.308651Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.75:2380"}
	{"level":"info","ts":"2024-10-28T12:21:13.308724Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.75:2380"}
	{"level":"info","ts":"2024-10-28T12:21:14.182347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-28T12:21:14.182429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-28T12:21:14.182472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 received MsgPreVoteResp from 36333190e10008e7 at term 1"}
	{"level":"info","ts":"2024-10-28T12:21:14.182493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 became candidate at term 2"}
	{"level":"info","ts":"2024-10-28T12:21:14.182526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 received MsgVoteResp from 36333190e10008e7 at term 2"}
	{"level":"info","ts":"2024-10-28T12:21:14.182543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36333190e10008e7 became leader at term 2"}
	{"level":"info","ts":"2024-10-28T12:21:14.182556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 36333190e10008e7 elected leader 36333190e10008e7 at term 2"}
	{"level":"info","ts":"2024-10-28T12:21:14.188349Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:21:14.192501Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"36333190e10008e7","local-member-attributes":"{Name:default-k8s-diff-port-349222 ClientURLs:[https://192.168.50.75:2379]}","request-path":"/0/members/36333190e10008e7/attributes","cluster-id":"5bdbf71200db9bfc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-28T12:21:14.193293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:21:14.193779Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-28T12:21:14.196378Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5bdbf71200db9bfc","local-member-id":"36333190e10008e7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:21:14.196500Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:21:14.196551Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-28T12:21:14.197304Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-28T12:21:14.197345Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-28T12:21:14.198038Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:21:14.199085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T12:21:14.203476Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:21:14.232390Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.75:2379"}
	
	
	==> kernel <==
	 12:30:36 up 14 min,  0 users,  load average: 0.21, 0.11, 0.09
	Linux default-k8s-diff-port-349222 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487] <==
	W1028 12:21:04.690925       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.691177       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.736979       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.749611       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.769659       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.825014       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.832723       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.886964       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.949711       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.977736       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.044495       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.060208       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.146554       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.286592       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.352424       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.569913       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.708692       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:08.465637       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:08.748326       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.042483       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.138741       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.431553       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.477715       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.506940       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.609540       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd] <==
	W1028 12:26:17.033067       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:26:17.033158       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:26:17.034294       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:26:17.034318       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:27:17.034907       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:27:17.035019       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 12:27:17.035104       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:27:17.035133       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:27:17.036337       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:27:17.036405       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:29:17.036821       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:29:17.036962       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 12:29:17.037074       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:29:17.037140       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:29:17.038133       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:29:17.038201       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e] <==
	E1028 12:25:23.043993       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:25:23.510495       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:25:53.050162       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:25:53.532668       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:26:23.057866       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:26:23.540651       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:26:36.246486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-349222"
	E1028 12:26:53.066108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:26:53.549578       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:27:23.074218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:27:23.558510       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:27:26.628703       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="342.347µs"
	I1028 12:27:40.617591       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="56.471µs"
	E1028 12:27:53.080767       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:27:53.573870       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:28:23.086893       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:28:23.582574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:28:53.096783       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:28:53.591154       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:29:23.105131       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:29:23.600053       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:29:53.112341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:29:53.619540       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:30:23.119951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:30:23.628675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:21:25.605114       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:21:25.614635       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.75"]
	E1028 12:21:25.614731       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:21:25.651647       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:21:25.651699       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:21:25.651732       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:21:25.654413       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:21:25.654732       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:21:25.654759       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:21:25.656192       1 config.go:199] "Starting service config controller"
	I1028 12:21:25.656227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:21:25.656319       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:21:25.656341       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:21:25.658899       1 config.go:328] "Starting node config controller"
	I1028 12:21:25.658975       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:21:25.757007       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 12:21:25.757332       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:21:25.759027       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38] <==
	W1028 12:21:16.966122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 12:21:16.966304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.016952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 12:21:17.017006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.019338       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:21:17.019386       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 12:21:17.024030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 12:21:17.024164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.134329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 12:21:17.134463       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.140161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 12:21:17.140412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.154720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 12:21:17.154775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.198423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 12:21:17.198492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.206440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 12:21:17.206495       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.246340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 12:21:17.246456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.368346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 12:21:17.368395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.377145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 12:21:17.377215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 12:21:20.266218       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 12:29:18 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:18.847500    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118558846542518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:28 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:28.848925    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118568848677583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:28 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:28.848967    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118568848677583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:30 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:30.601116    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:29:38 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:38.850808    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118578850603014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:38 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:38.850856    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118578850603014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:42 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:42.603472    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:29:48 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:48.851825    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118588851612796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:48 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:48.851858    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118588851612796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:57 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:57.602006    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:29:58 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:58.854416    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118598853590139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:29:58 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:29:58.854472    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118598853590139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:08 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:30:08.855428    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118608855130814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:08 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:30:08.855711    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118608855130814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:12 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:30:12.602441    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:30:18 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:30:18.635595    2933 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 12:30:18 default-k8s-diff-port-349222 kubelet[2933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 12:30:18 default-k8s-diff-port-349222 kubelet[2933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 12:30:18 default-k8s-diff-port-349222 kubelet[2933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 12:30:18 default-k8s-diff-port-349222 kubelet[2933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 12:30:18 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:30:18.857677    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118618857220243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:18 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:30:18.857964    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118618857220243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:26 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:30:26.603200    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:30:28 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:30:28.859411    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118628858993060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:30:28 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:30:28.859471    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118628858993060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c] <==
	I1028 12:21:25.319679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:21:25.464715       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:21:25.464760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:21:25.498933       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:21:25.499105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-349222_a78ad73a-4d1f-4a3a-b56d-98d17bafc5cc!
	I1028 12:21:25.507084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7e1418f-921a-4177-89d4-79db96a98cb8", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-349222_a78ad73a-4d1f-4a3a-b56d-98d17bafc5cc became leader
	I1028 12:21:25.599565       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-349222_a78ad73a-4d1f-4a3a-b56d-98d17bafc5cc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-349222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4xgsk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-349222 describe pod metrics-server-6867b74b74-4xgsk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-349222 describe pod metrics-server-6867b74b74-4xgsk: exit status 1 (68.249764ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4xgsk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-349222 describe pod metrics-server-6867b74b74-4xgsk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
E1028 12:25:09.887104  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
E1028 12:27:38.998248  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
[the previous WARNING line repeated 150 more times; duplicates omitted]
E1028 12:30:09.886554  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
[last message repeated 31 more times]
E1028 12:30:42.073198  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
[last message repeated 116 more times]
E1028 12:32:38.998742  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 2 (231.905616ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-089993" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
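A quick manual repro of the check that timed out here, outside the test harness, is to query the profile status and the dashboard pods the test was polling for. This is only a sketch based on the profile name and label selector visible in the log above, not part of the test suite itself:

	# confirm whether the apiserver for the profile is actually up
	out/minikube-linux-amd64 status -p old-k8s-version-089993
	# list the pods the test waits on; this fails with "connection refused" while the apiserver is down
	kubectl --context old-k8s-version-089993 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard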
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 2 (240.577566ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-089993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-089993 logs -n 25: (1.574124675s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-601400                              | cert-expiration-601400       | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-871884             | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-219559 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | disable-driver-mounts-219559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:10 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709250            | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC | 28 Oct 24 12:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089993        | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-871884                  | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-349222  | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709250                 | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089993             | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-349222       | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:13 UTC | 28 Oct 24 12:21 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:13:02
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:13:02.452508  186547 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:13:02.452621  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452630  186547 out.go:358] Setting ErrFile to fd 2...
	I1028 12:13:02.452635  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452828  186547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:13:02.453378  186547 out.go:352] Setting JSON to false
	I1028 12:13:02.454320  186547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6925,"bootTime":1730110657,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:13:02.454420  186547 start.go:139] virtualization: kvm guest
	I1028 12:13:02.456754  186547 out.go:177] * [default-k8s-diff-port-349222] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:13:02.458343  186547 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:13:02.458413  186547 notify.go:220] Checking for updates...
	I1028 12:13:02.460946  186547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:13:02.462089  186547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:13:02.463460  186547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:13:02.464649  186547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:13:02.466107  186547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:13:02.468142  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:13:02.468518  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.468587  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.483793  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1028 12:13:02.484273  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.484861  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.484884  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.485260  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.485471  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.485712  186547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:13:02.485997  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.486030  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.501110  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I1028 12:13:02.501722  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.502335  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.502362  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.502682  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.502878  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.539766  186547 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:13:02.541024  186547 start.go:297] selected driver: kvm2
	I1028 12:13:02.541038  186547 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.541168  186547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:13:02.541929  186547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.542014  186547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:13:02.557443  186547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:13:02.557868  186547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:13:02.557902  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:13:02.557947  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:13:02.557987  186547 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.558098  186547 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.560651  186547 out.go:177] * Starting "default-k8s-diff-port-349222" primary control-plane node in "default-k8s-diff-port-349222" cluster
	I1028 12:13:02.693744  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:02.561767  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:13:02.561800  186547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:13:02.561810  186547 cache.go:56] Caching tarball of preloaded images
	I1028 12:13:02.561877  186547 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:13:02.561887  186547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:13:02.561973  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:13:02.562165  186547 start.go:360] acquireMachinesLock for default-k8s-diff-port-349222: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:13:08.773770  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:11.845825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:17.925957  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:20.997733  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:27.077858  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:30.149737  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:36.229851  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:39.301764  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:45.381781  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:48.453767  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:54.533793  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:57.605754  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:03.685848  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:06.757874  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:12.837829  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:15.909778  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:21.989850  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:25.061812  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:31.141825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:34.213757  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:40.293844  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:43.365865  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:49.445872  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:52.517750  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:58.597834  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:01.669837  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:07.749853  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:10.821838  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:13.826298  185942 start.go:364] duration metric: took 3m37.788021766s to acquireMachinesLock for "embed-certs-709250"
	I1028 12:15:13.826369  185942 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:13.826382  185942 fix.go:54] fixHost starting: 
	I1028 12:15:13.827047  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:13.827113  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:13.842889  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I1028 12:15:13.843403  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:13.843915  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:15:13.843938  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:13.844374  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:13.844568  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:13.844733  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:15:13.846440  185942 fix.go:112] recreateIfNeeded on embed-certs-709250: state=Stopped err=<nil>
	I1028 12:15:13.846464  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	W1028 12:15:13.846629  185942 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:13.848878  185942 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709250" ...
	I1028 12:15:13.850607  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Start
	I1028 12:15:13.850800  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring networks are active...
	I1028 12:15:13.851930  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network default is active
	I1028 12:15:13.852331  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network mk-embed-certs-709250 is active
	I1028 12:15:13.852652  185942 main.go:141] libmachine: (embed-certs-709250) Getting domain xml...
	I1028 12:15:13.853394  185942 main.go:141] libmachine: (embed-certs-709250) Creating domain...
	I1028 12:15:15.098667  185942 main.go:141] libmachine: (embed-certs-709250) Waiting to get IP...
	I1028 12:15:15.099525  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.099919  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.099951  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.099877  187018 retry.go:31] will retry after 285.25732ms: waiting for machine to come up
	I1028 12:15:15.386531  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.386992  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.387023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.386921  187018 retry.go:31] will retry after 327.08041ms: waiting for machine to come up
	I1028 12:15:15.715435  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.715900  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.715928  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.715846  187018 retry.go:31] will retry after 443.083162ms: waiting for machine to come up
	I1028 12:15:13.823652  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:13.823724  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824056  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:15:13.824085  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824284  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:15:13.826158  185546 machine.go:96] duration metric: took 4m37.413470632s to provisionDockerMachine
	I1028 12:15:13.826202  185546 fix.go:56] duration metric: took 4m37.436313043s for fixHost
	I1028 12:15:13.826208  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 4m37.436350273s
	W1028 12:15:13.826226  185546 start.go:714] error starting host: provision: host is not running
	W1028 12:15:13.826336  185546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 12:15:13.826346  185546 start.go:729] Will try again in 5 seconds ...
	I1028 12:15:16.160595  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.161024  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.161045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.161003  187018 retry.go:31] will retry after 599.535995ms: waiting for machine to come up
	I1028 12:15:16.761771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.762167  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.762213  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.762114  187018 retry.go:31] will retry after 527.275961ms: waiting for machine to come up
	I1028 12:15:17.290788  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:17.291124  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:17.291145  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:17.291098  187018 retry.go:31] will retry after 858.175967ms: waiting for machine to come up
	I1028 12:15:18.150644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.151045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.151071  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.150996  187018 retry.go:31] will retry after 727.962346ms: waiting for machine to come up
	I1028 12:15:18.880545  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.880990  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.881020  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.880942  187018 retry.go:31] will retry after 1.184956373s: waiting for machine to come up
	I1028 12:15:20.067178  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:20.067603  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:20.067635  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:20.067553  187018 retry.go:31] will retry after 1.635056202s: waiting for machine to come up
	I1028 12:15:18.827987  185546 start.go:360] acquireMachinesLock for no-preload-871884: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:15:21.703941  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:21.704338  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:21.704365  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:21.704302  187018 retry.go:31] will retry after 1.865473383s: waiting for machine to come up
	I1028 12:15:23.572362  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:23.572816  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:23.572843  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:23.572773  187018 retry.go:31] will retry after 2.604970031s: waiting for machine to come up
	I1028 12:15:26.181289  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:26.181849  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:26.181880  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:26.181788  187018 retry.go:31] will retry after 2.866004055s: waiting for machine to come up
	I1028 12:15:29.049604  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:29.050029  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:29.050068  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:29.049970  187018 retry.go:31] will retry after 3.046879869s: waiting for machine to come up
	I1028 12:15:33.350844  186170 start.go:364] duration metric: took 3m34.924904114s to acquireMachinesLock for "old-k8s-version-089993"
	I1028 12:15:33.350912  186170 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:33.350923  186170 fix.go:54] fixHost starting: 
	I1028 12:15:33.351392  186170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:33.351440  186170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:33.368339  186170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1028 12:15:33.368805  186170 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:33.369418  186170 main.go:141] libmachine: Using API Version  1
	I1028 12:15:33.369439  186170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:33.369784  186170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:33.369969  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:33.370125  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetState
	I1028 12:15:33.371873  186170 fix.go:112] recreateIfNeeded on old-k8s-version-089993: state=Stopped err=<nil>
	I1028 12:15:33.371908  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	W1028 12:15:33.372086  186170 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:33.374289  186170 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-089993" ...
	I1028 12:15:32.100252  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.100812  185942 main.go:141] libmachine: (embed-certs-709250) Found IP for machine: 192.168.39.211
	I1028 12:15:32.100831  185942 main.go:141] libmachine: (embed-certs-709250) Reserving static IP address...
	I1028 12:15:32.100842  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has current primary IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.101552  185942 main.go:141] libmachine: (embed-certs-709250) Reserved static IP address: 192.168.39.211
	I1028 12:15:32.101568  185942 main.go:141] libmachine: (embed-certs-709250) Waiting for SSH to be available...
	I1028 12:15:32.101602  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.101629  185942 main.go:141] libmachine: (embed-certs-709250) DBG | skip adding static IP to network mk-embed-certs-709250 - found existing host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"}
	I1028 12:15:32.101644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Getting to WaitForSSH function...
	I1028 12:15:32.104041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.104356  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104459  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH client type: external
	I1028 12:15:32.104488  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa (-rw-------)
	I1028 12:15:32.104519  185942 main.go:141] libmachine: (embed-certs-709250) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:32.104530  185942 main.go:141] libmachine: (embed-certs-709250) DBG | About to run SSH command:
	I1028 12:15:32.104538  185942 main.go:141] libmachine: (embed-certs-709250) DBG | exit 0
	I1028 12:15:32.233966  185942 main.go:141] libmachine: (embed-certs-709250) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:32.234363  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetConfigRaw
	I1028 12:15:32.235010  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.237443  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.237755  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.237783  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.238040  185942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/config.json ...
	I1028 12:15:32.238272  185942 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:32.238291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:32.238541  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.240765  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241165  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.241212  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241333  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.241520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241704  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241836  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.241989  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.242234  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.242247  185942 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:32.358412  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:32.358443  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.358773  185942 buildroot.go:166] provisioning hostname "embed-certs-709250"
	I1028 12:15:32.358810  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.359027  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.361776  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362122  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.362161  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362262  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.362429  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362579  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362709  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.362867  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.363084  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.363098  185942 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709250 && echo "embed-certs-709250" | sudo tee /etc/hostname
	I1028 12:15:32.492437  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709250
	
	I1028 12:15:32.492466  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.495108  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495394  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.495438  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495586  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.495771  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.495927  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.496054  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.496215  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.496399  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.496416  185942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709250/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:32.619038  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:32.619074  185942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:32.619113  185942 buildroot.go:174] setting up certificates
	I1028 12:15:32.619125  185942 provision.go:84] configureAuth start
	I1028 12:15:32.619137  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.619451  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.622055  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622448  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.622479  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622593  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.624610  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625037  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.625066  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625086  185942 provision.go:143] copyHostCerts
	I1028 12:15:32.625174  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:32.625190  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:32.625259  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:32.625396  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:32.625407  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:32.625444  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:32.625519  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:32.625541  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:32.625575  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:32.625645  185942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709250 san=[127.0.0.1 192.168.39.211 embed-certs-709250 localhost minikube]
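The server certificate generated in the entry above carries the SANs listed there (127.0.0.1, 192.168.39.211, embed-certs-709250, localhost, minikube). A minimal sketch, using the server.pem path from the log, of inspecting those SANs by hand:
# Sketch: print the Subject Alternative Names of the generated server certificate
openssl x509 -noout -text -in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'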
	I1028 12:15:32.684483  185942 provision.go:177] copyRemoteCerts
	I1028 12:15:32.684547  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:32.684576  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.686926  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687244  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.687284  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687427  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.687617  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.687744  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.687890  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:32.776282  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:32.802180  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 12:15:32.829609  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:32.854274  185942 provision.go:87] duration metric: took 235.133526ms to configureAuth
	I1028 12:15:32.854305  185942 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:32.854474  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:15:32.854547  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.857363  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.857736  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.857771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.858038  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.858251  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858442  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858652  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.858809  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.858979  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.858996  185942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:33.101841  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:33.101870  185942 machine.go:96] duration metric: took 863.584969ms to provisionDockerMachine
	I1028 12:15:33.101883  185942 start.go:293] postStartSetup for "embed-certs-709250" (driver="kvm2")
	I1028 12:15:33.101896  185942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:33.101911  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.102249  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:33.102285  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.105023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.105357  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105493  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.105710  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.105881  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.106032  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.193225  185942 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:33.197548  185942 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:33.197570  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:33.197637  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:33.197739  185942 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:33.197861  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:33.207962  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:33.231808  185942 start.go:296] duration metric: took 129.908529ms for postStartSetup
	I1028 12:15:33.231853  185942 fix.go:56] duration metric: took 19.405472942s for fixHost
	I1028 12:15:33.231875  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.234609  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.234943  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.234979  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.235167  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.235370  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235642  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.235806  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:33.236026  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:33.236041  185942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:33.350639  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117733.322211717
	
	I1028 12:15:33.350663  185942 fix.go:216] guest clock: 1730117733.322211717
	I1028 12:15:33.350673  185942 fix.go:229] Guest: 2024-10-28 12:15:33.322211717 +0000 UTC Remote: 2024-10-28 12:15:33.231858201 +0000 UTC m=+237.345598419 (delta=90.353516ms)
	I1028 12:15:33.350707  185942 fix.go:200] guest clock delta is within tolerance: 90.353516ms
	I1028 12:15:33.350714  185942 start.go:83] releasing machines lock for "embed-certs-709250", held for 19.524379046s
	I1028 12:15:33.350737  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.350974  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:33.353647  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354012  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.354041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354244  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354690  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354873  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354973  185942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:33.355017  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.355090  185942 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:33.355116  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.357679  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358050  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358074  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358242  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358389  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.358542  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.358584  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358616  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358681  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.358721  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358892  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.359048  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.359197  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.443468  185942 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:33.498501  185942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:33.642221  185942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:33.649269  185942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:33.649336  185942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:33.665990  185942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:33.666023  185942 start.go:495] detecting cgroup driver to use...
	I1028 12:15:33.666103  185942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:33.683188  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:33.699441  185942 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:33.699506  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:33.714192  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:33.728325  185942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:33.850801  185942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:34.028929  185942 docker.go:233] disabling docker service ...
	I1028 12:15:34.028991  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:34.045600  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:34.059450  185942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:34.182310  185942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:34.305346  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:34.322354  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:34.342738  185942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:15:34.342804  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.354622  185942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:34.354687  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.365663  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.376503  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.388360  185942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:34.399960  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.419087  185942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.439700  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.451425  185942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:34.461657  185942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:34.461710  185942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:34.476292  185942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:34.487186  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:34.614984  185942 ssh_runner.go:195] Run: sudo systemctl restart crio
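The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf; once the restart completes, a minimal sketch (file layout assumed from the paths shown in the log) of double-checking the settings by hand:
# Sketch: confirm the drop-in carries the values the sed commands set
grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
# and that forwarding and br_netfilter are in place (the log loads the module explicitly)
sysctl net.ipv4.ip_forward
lsmod | grep br_netfilter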
	I1028 12:15:34.709983  185942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:34.710061  185942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:34.715204  185942 start.go:563] Will wait 60s for crictl version
	I1028 12:15:34.715268  185942 ssh_runner.go:195] Run: which crictl
	I1028 12:15:34.719459  185942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:34.760330  185942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:34.760407  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.788635  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.820113  185942 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:15:34.821282  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:34.824384  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.824719  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:34.824746  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.825032  185942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:34.829502  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
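The bash snippet above rewrites the guest's /etc/hosts so host.minikube.internal resolves to the libvirt gateway; a minimal sketch of verifying the entry:
# Sketch: /etc/hosts on the guest should now map host.minikube.internal to 192.168.39.1
grep 'host.minikube.internal' /etc/hosts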
	I1028 12:15:34.842695  185942 kubeadm.go:883] updating cluster {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:34.842845  185942 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:15:34.842897  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:34.881154  185942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:15:34.881218  185942 ssh_runner.go:195] Run: which lz4
	I1028 12:15:34.885630  185942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:34.890045  185942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:34.890075  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:15:33.375597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .Start
	I1028 12:15:33.375787  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring networks are active...
	I1028 12:15:33.376736  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network default is active
	I1028 12:15:33.377208  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network mk-old-k8s-version-089993 is active
	I1028 12:15:33.377706  186170 main.go:141] libmachine: (old-k8s-version-089993) Getting domain xml...
	I1028 12:15:33.378449  186170 main.go:141] libmachine: (old-k8s-version-089993) Creating domain...
	I1028 12:15:34.645925  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting to get IP...
	I1028 12:15:34.646739  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.647234  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.647347  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.647218  187153 retry.go:31] will retry after 292.558863ms: waiting for machine to come up
	I1028 12:15:34.941609  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.942074  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.942102  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.942024  187153 retry.go:31] will retry after 331.872118ms: waiting for machine to come up
	I1028 12:15:35.275748  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.276283  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.276318  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.276244  187153 retry.go:31] will retry after 427.829102ms: waiting for machine to come up
	I1028 12:15:35.705935  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.706409  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.706438  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.706367  187153 retry.go:31] will retry after 371.58196ms: waiting for machine to come up
	I1028 12:15:36.079879  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.080445  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.080469  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.080392  187153 retry.go:31] will retry after 504.323728ms: waiting for machine to come up
	I1028 12:15:36.585967  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.586405  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.586436  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.586346  187153 retry.go:31] will retry after 676.776678ms: waiting for machine to come up
	I1028 12:15:37.265499  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:37.266087  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:37.266114  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:37.266037  187153 retry.go:31] will retry after 1.178891662s: waiting for machine to come up
	I1028 12:15:36.448704  185942 crio.go:462] duration metric: took 1.563096609s to copy over tarball
	I1028 12:15:36.448792  185942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:38.703177  185942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25435315s)
	I1028 12:15:38.703207  185942 crio.go:469] duration metric: took 2.254465841s to extract the tarball
	I1028 12:15:38.703217  185942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:38.741005  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:38.788350  185942 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:15:38.788376  185942 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:15:38.788383  185942 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1028 12:15:38.788491  185942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:15:38.788558  185942 ssh_runner.go:195] Run: crio config
	I1028 12:15:38.835642  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:38.835667  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:38.835678  185942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:15:38.835701  185942 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709250 NodeName:embed-certs-709250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:15:38.835822  185942 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709250"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:15:38.835879  185942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:15:38.846832  185942 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:15:38.846925  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:15:38.857103  185942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1028 12:15:38.874531  185942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:15:38.892213  185942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
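With the rendered config copied to /var/tmp/minikube/kubeadm.yaml.new, a minimal sketch of sanity-checking it before it replaces the live file. This assumes a kubeadm release that ships the "config validate" subcommand; that assumption, not something the test itself runs, is what the line below relies on:
# Sketch: validate the freshly rendered kubeadm configuration (subcommand availability assumed)
sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new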
	I1028 12:15:38.910949  185942 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1028 12:15:38.915391  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:38.928874  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:39.045969  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:15:39.063425  185942 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250 for IP: 192.168.39.211
	I1028 12:15:39.063449  185942 certs.go:194] generating shared ca certs ...
	I1028 12:15:39.063465  185942 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:15:39.063638  185942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:15:39.063693  185942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:15:39.063709  185942 certs.go:256] generating profile certs ...
	I1028 12:15:39.063810  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.key
	I1028 12:15:39.063893  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce
	I1028 12:15:39.063951  185942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key
	I1028 12:15:39.064107  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:15:39.064153  185942 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:15:39.064167  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:15:39.064202  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:15:39.064239  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:15:39.064272  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:15:39.064335  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:39.064972  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:15:39.103261  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:15:39.145102  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:15:39.175151  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:15:39.205220  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 12:15:39.236045  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:15:39.273622  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:15:39.299336  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:15:39.325277  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:15:39.349878  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:15:39.374466  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:15:39.398920  185942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:15:39.416280  185942 ssh_runner.go:195] Run: openssl version
	I1028 12:15:39.422478  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:15:39.434671  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439581  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439635  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.445736  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:15:39.457128  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:15:39.468602  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473229  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473306  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.479063  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:15:39.490370  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:15:39.501843  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506514  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506579  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.512633  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:15:39.524115  185942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:15:39.528804  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:15:39.534982  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:15:39.541214  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:15:39.547734  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:15:39.554143  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:15:39.560719  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
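Each -checkend 86400 invocation above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not have expired by then. A minimal sketch of the same check with an explicit message:
# Sketch: report whether a certificate survives the next 24h (exit 0 from -checkend means it does)
if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
  echo "certificate valid for at least another 24h"
else
  echo "certificate expires within 24h (or could not be read)"
fi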
	I1028 12:15:39.567076  185942 kubeadm.go:392] StartCluster: {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:15:39.567173  185942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:15:39.567226  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.611567  185942 cri.go:89] found id: ""
	I1028 12:15:39.611644  185942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:15:39.622561  185942 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:15:39.622583  185942 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:15:39.622637  185942 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:15:39.632757  185942 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:15:39.633873  185942 kubeconfig.go:125] found "embed-certs-709250" server: "https://192.168.39.211:8443"
	I1028 12:15:39.635943  185942 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:15:39.646060  185942 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I1028 12:15:39.646104  185942 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:15:39.646119  185942 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:15:39.646177  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.686806  185942 cri.go:89] found id: ""
	I1028 12:15:39.686891  185942 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:15:39.703935  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:15:39.714319  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:15:39.714341  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:15:39.714389  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:15:39.725383  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:15:39.725452  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:15:39.737075  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:15:39.748226  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:15:39.748311  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:15:39.760111  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.770287  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:15:39.770365  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.780776  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:15:39.790412  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:15:39.790475  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:15:39.800727  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:15:39.811331  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:39.926791  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:38.446927  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:38.447488  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:38.447518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:38.447431  187153 retry.go:31] will retry after 1.170920623s: waiting for machine to come up
	I1028 12:15:39.619731  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:39.620169  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:39.620198  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:39.620119  187153 retry.go:31] will retry after 1.49376255s: waiting for machine to come up
	I1028 12:15:41.115247  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:41.115785  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:41.115815  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:41.115737  187153 retry.go:31] will retry after 2.161966931s: waiting for machine to come up
	I1028 12:15:43.280454  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:43.280989  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:43.281026  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:43.280932  187153 retry.go:31] will retry after 2.179284899s: waiting for machine to come up
	I1028 12:15:41.043020  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.11617977s)
	I1028 12:15:41.043082  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.246311  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.309073  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.392313  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:15:41.392425  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:41.893601  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.393518  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.444753  185942 api_server.go:72] duration metric: took 1.052438751s to wait for apiserver process to appear ...
	I1028 12:15:42.444794  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:15:42.444821  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.214786  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.214821  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.214837  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.252422  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.252458  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.445825  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.451454  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.451549  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:45.945668  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.956623  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.956667  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.445240  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.450197  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:46.450223  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.945901  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.950302  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:15:46.956218  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:15:46.956245  185942 api_server.go:131] duration metric: took 4.511443878s to wait for apiserver health ...
	I1028 12:15:46.956254  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:46.956260  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:46.958294  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:15:45.462983  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:45.463534  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:45.463560  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:45.463491  187153 retry.go:31] will retry after 2.2623086s: waiting for machine to come up
	I1028 12:15:47.728769  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:47.729277  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:47.729332  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:47.729241  187153 retry.go:31] will retry after 4.393695309s: waiting for machine to come up
	I1028 12:15:46.959738  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:15:46.970473  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:15:46.994129  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:15:47.003807  185942 system_pods.go:59] 8 kube-system pods found
	I1028 12:15:47.003843  185942 system_pods.go:61] "coredns-7c65d6cfc9-j66cd" [d53b2839-00f6-4ccc-833d-76424b3efdba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:15:47.003851  185942 system_pods.go:61] "etcd-embed-certs-709250" [24761127-dde4-4f5d-b7cf-a13e37366e0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:15:47.003858  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [17996153-32c3-41e0-be90-fc9e058e0080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:15:47.003864  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [4ce37c00-1015-45f8-b847-1ca92cdf3a31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:15:47.003871  185942 system_pods.go:61] "kube-proxy-dl7xq" [a06ed5ff-b1e9-42c7-ba26-f120bb03ccb6] Running
	I1028 12:15:47.003877  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [c76e654e-a7fc-4891-8e73-bd74f9178c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:15:47.003883  185942 system_pods.go:61] "metrics-server-6867b74b74-k69kz" [568d5308-3f66-459b-b5c8-594d9400b6c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:15:47.003886  185942 system_pods.go:61] "storage-provisioner" [6552cef1-21b6-4306-a3e2-ff16793257dc] Running
	I1028 12:15:47.003893  185942 system_pods.go:74] duration metric: took 9.734271ms to wait for pod list to return data ...
	I1028 12:15:47.003900  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:15:47.008428  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:15:47.008465  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:15:47.008479  185942 node_conditions.go:105] duration metric: took 4.573275ms to run NodePressure ...
	I1028 12:15:47.008504  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:47.285509  185942 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291045  185942 kubeadm.go:739] kubelet initialised
	I1028 12:15:47.291069  185942 kubeadm.go:740] duration metric: took 5.521713ms waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291078  185942 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:15:47.299072  185942 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:49.312365  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:50.804953  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:50.804976  185942 pod_ready.go:82] duration metric: took 3.505873868s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:50.804986  185942 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:52.126559  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126960  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has current primary IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126988  186170 main.go:141] libmachine: (old-k8s-version-089993) Found IP for machine: 192.168.61.119
	I1028 12:15:52.127021  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserving static IP address...
	I1028 12:15:52.127441  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.127474  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | skip adding static IP to network mk-old-k8s-version-089993 - found existing host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"}
	I1028 12:15:52.127486  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserved static IP address: 192.168.61.119
	I1028 12:15:52.127498  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting for SSH to be available...
	I1028 12:15:52.127551  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Getting to WaitForSSH function...
	I1028 12:15:52.129970  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130313  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.130349  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH client type: external
	I1028 12:15:52.130540  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa (-rw-------)
	I1028 12:15:52.130565  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:52.130578  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | About to run SSH command:
	I1028 12:15:52.130593  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | exit 0
	I1028 12:15:52.253686  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:52.254051  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:15:52.254719  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.257217  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257692  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.257719  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257996  186170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:15:52.258203  186170 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:52.258222  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:52.258456  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.260665  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.260972  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.261012  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.261139  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.261295  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261451  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261632  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.261786  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.261968  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.261979  186170 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:52.362092  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:52.362129  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362362  186170 buildroot.go:166] provisioning hostname "old-k8s-version-089993"
	I1028 12:15:52.362386  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362588  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.365124  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.365489  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365598  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.365768  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.365924  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.366060  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.366238  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.366424  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.366441  186170 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089993 && echo "old-k8s-version-089993" | sudo tee /etc/hostname
	I1028 12:15:52.485032  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089993
	
	I1028 12:15:52.485069  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.487733  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488095  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.488129  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488270  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.488458  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488724  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.488872  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.489063  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.489079  186170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089993/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:52.599940  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:52.599975  186170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:52.600009  186170 buildroot.go:174] setting up certificates
	I1028 12:15:52.600019  186170 provision.go:84] configureAuth start
	I1028 12:15:52.600028  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.600319  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.603047  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603357  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.603385  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603536  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.605827  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606164  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.606190  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606334  186170 provision.go:143] copyHostCerts
	I1028 12:15:52.606414  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:52.606429  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:52.606500  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:52.606650  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:52.606661  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:52.606693  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:52.606795  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:52.606805  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:52.606834  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:52.606904  186170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089993 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-089993]
	I1028 12:15:52.715475  186170 provision.go:177] copyRemoteCerts
	I1028 12:15:52.715531  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:52.715556  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.718456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718758  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.718801  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718993  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.719189  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.719339  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.719461  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:52.802994  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:52.832311  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:15:52.864304  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:52.892143  186170 provision.go:87] duration metric: took 292.108499ms to configureAuth
	I1028 12:15:52.892178  186170 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:52.892401  186170 config.go:182] Loaded profile config "old-k8s-version-089993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:15:52.892499  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.895607  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.895996  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.896031  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.896198  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.896442  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896615  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896796  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.897005  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.897225  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.897249  186170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:53.144636  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:53.144668  186170 machine.go:96] duration metric: took 886.451205ms to provisionDockerMachine
	I1028 12:15:53.144683  186170 start.go:293] postStartSetup for "old-k8s-version-089993" (driver="kvm2")
	I1028 12:15:53.144701  186170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:53.144739  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.145102  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:53.145135  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.147486  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147776  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.147805  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147926  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.148136  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.148297  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.148438  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.228968  186170 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:53.233756  186170 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:53.233783  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:53.233862  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:53.233981  186170 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:53.234114  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:53.244314  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:53.273027  186170 start.go:296] duration metric: took 128.321696ms for postStartSetup
	I1028 12:15:53.273067  186170 fix.go:56] duration metric: took 19.922145767s for fixHost
	I1028 12:15:53.273087  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.275762  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276036  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.276069  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276227  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.276431  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276610  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276759  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.276948  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:53.277130  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:53.277140  186170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:53.378422  186547 start.go:364] duration metric: took 2m50.816229865s to acquireMachinesLock for "default-k8s-diff-port-349222"
	I1028 12:15:53.378482  186547 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:53.378491  186547 fix.go:54] fixHost starting: 
	I1028 12:15:53.378917  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:53.378971  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:53.395967  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I1028 12:15:53.396434  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:53.396923  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:15:53.396950  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:53.397332  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:53.397552  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:15:53.397726  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:15:53.399287  186547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349222: state=Stopped err=<nil>
	I1028 12:15:53.399337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	W1028 12:15:53.399468  186547 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:53.401446  186547 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-349222" ...
	I1028 12:15:53.378277  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117753.349360033
	
	I1028 12:15:53.378307  186170 fix.go:216] guest clock: 1730117753.349360033
	I1028 12:15:53.378327  186170 fix.go:229] Guest: 2024-10-28 12:15:53.349360033 +0000 UTC Remote: 2024-10-28 12:15:53.273071551 +0000 UTC m=+234.997009775 (delta=76.288482ms)
	I1028 12:15:53.378346  186170 fix.go:200] guest clock delta is within tolerance: 76.288482ms
	I1028 12:15:53.378351  186170 start.go:83] releasing machines lock for "old-k8s-version-089993", held for 20.027466326s
	I1028 12:15:53.378379  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.378640  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:53.381602  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.381951  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.381980  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.382165  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382654  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382864  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382949  186170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:53.382997  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.383090  186170 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:53.383109  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.385829  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.385926  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386223  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386272  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386303  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386343  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386522  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386692  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.386704  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386849  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387012  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.387009  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.387217  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387355  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.462736  186170 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:53.490076  186170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:53.637493  186170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:53.643609  186170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:53.643668  186170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:53.660695  186170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:53.660725  186170 start.go:495] detecting cgroup driver to use...
	I1028 12:15:53.660797  186170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:53.677283  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:53.691838  186170 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:53.691914  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:53.706354  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:53.721257  186170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:53.843177  186170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:54.012260  186170 docker.go:233] disabling docker service ...
	I1028 12:15:54.012369  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:54.028355  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:54.042371  186170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:54.175559  186170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:54.308690  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:54.323918  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:54.343000  186170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:15:54.343072  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.354540  186170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:54.354620  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.366058  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.377720  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.388649  186170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:54.401499  186170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:54.414177  186170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:54.414250  186170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:54.429049  186170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:54.441955  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:54.588173  186170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:15:54.686671  186170 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:54.686732  186170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:54.692246  186170 start.go:563] Will wait 60s for crictl version
	I1028 12:15:54.692303  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:15:54.697056  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:54.749343  186170 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:54.749410  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.783554  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.817295  186170 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:15:52.838774  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.811974  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:53.811997  185942 pod_ready.go:82] duration metric: took 3.00700476s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:53.812008  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:55.824400  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.402920  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Start
	I1028 12:15:53.403172  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring networks are active...
	I1028 12:15:53.403912  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network default is active
	I1028 12:15:53.404195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network mk-default-k8s-diff-port-349222 is active
	I1028 12:15:53.404615  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Getting domain xml...
	I1028 12:15:53.405554  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Creating domain...
	I1028 12:15:54.734540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting to get IP...
	I1028 12:15:54.735417  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735784  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735880  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:54.735759  187305 retry.go:31] will retry after 268.036011ms: waiting for machine to come up
	I1028 12:15:55.005376  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.005999  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.006032  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.005930  187305 retry.go:31] will retry after 255.477665ms: waiting for machine to come up
	I1028 12:15:55.263500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264118  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264153  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.264087  187305 retry.go:31] will retry after 354.942061ms: waiting for machine to come up
	I1028 12:15:55.620877  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621664  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621698  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.621610  187305 retry.go:31] will retry after 569.620755ms: waiting for machine to come up
	I1028 12:15:56.192393  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192872  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.192803  187305 retry.go:31] will retry after 703.637263ms: waiting for machine to come up
	I1028 12:15:56.897762  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898304  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.898214  187305 retry.go:31] will retry after 713.628482ms: waiting for machine to come up
	I1028 12:15:54.818674  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:54.822118  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822477  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:54.822508  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822713  186170 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:54.827066  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:54.839718  186170 kubeadm.go:883] updating cluster {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:54.839871  186170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:15:54.839932  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:54.896582  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:54.896647  186170 ssh_runner.go:195] Run: which lz4
	I1028 12:15:54.901264  186170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:54.905758  186170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:54.905798  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:15:56.763719  186170 crio.go:462] duration metric: took 1.862485619s to copy over tarball
	I1028 12:15:56.763807  186170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:58.321600  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:00.018244  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.018285  185942 pod_ready.go:82] duration metric: took 6.206271068s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.018297  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028610  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.028638  185942 pod_ready.go:82] duration metric: took 10.334289ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028653  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041057  185942 pod_ready.go:93] pod "kube-proxy-dl7xq" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.041091  185942 pod_ready.go:82] duration metric: took 12.429027ms for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041106  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049617  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.049645  185942 pod_ready.go:82] duration metric: took 8.529436ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049659  185942 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:57.613338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613844  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613873  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:57.613796  187305 retry.go:31] will retry after 1.188479203s: waiting for machine to come up
	I1028 12:15:58.803300  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803690  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803724  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:58.803650  187305 retry.go:31] will retry after 1.439057212s: waiting for machine to come up
	I1028 12:16:00.244665  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245201  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245239  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:00.245141  187305 retry.go:31] will retry after 1.842038011s: waiting for machine to come up
	I1028 12:16:02.090283  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090879  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:02.090828  187305 retry.go:31] will retry after 1.556155538s: waiting for machine to come up
	I1028 12:15:59.824110  186170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060253776s)
	I1028 12:15:59.824148  186170 crio.go:469] duration metric: took 3.060398276s to extract the tarball
	I1028 12:15:59.824158  186170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:59.871783  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:59.913216  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:59.913249  186170 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:15:59.913338  186170 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.913374  186170 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.913404  186170 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.913415  186170 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.913435  186170 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.913459  186170 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.913378  186170 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:15:59.913372  186170 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:15:59.914923  186170 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.914935  186170 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.914944  186170 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.914924  186170 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:15:59.915002  186170 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.915023  186170 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.107392  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.125355  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.128498  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.134762  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.138350  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.141722  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:16:00.186291  186170 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:16:00.186340  186170 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.186404  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253168  186170 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:16:00.253211  186170 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.253256  186170 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:16:00.253279  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253288  186170 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.253328  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290772  186170 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:16:00.290817  186170 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.290857  186170 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:16:00.290890  186170 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:16:00.290869  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290913  186170 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:16:00.290946  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290970  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.290896  186170 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.291016  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.291049  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.291080  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.317629  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.377316  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.377376  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.377463  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.377515  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.488216  186170 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:16:00.488279  186170 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.488337  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.513051  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.556242  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.556277  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.556380  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.556435  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.556544  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.556560  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.634253  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.737688  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.737739  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.737799  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:16:00.737870  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:16:00.737897  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:16:00.738000  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.832218  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:16:00.832247  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:16:00.832284  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:16:00.844460  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.880788  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:16:01.121687  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:01.269970  186170 cache_images.go:92] duration metric: took 1.356701981s to LoadCachedImages
	W1028 12:16:01.270091  186170 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 12:16:01.270114  186170 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1028 12:16:01.270229  186170 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089993 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:01.270317  186170 ssh_runner.go:195] Run: crio config
	I1028 12:16:01.330579  186170 cni.go:84] Creating CNI manager for ""
	I1028 12:16:01.330604  186170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:01.330615  186170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:01.330634  186170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089993 NodeName:old-k8s-version-089993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:16:01.330861  186170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089993"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:01.330940  186170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:16:01.342449  186170 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:01.342542  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:01.354804  186170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:16:01.373823  186170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:01.393848  186170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:16:01.414537  186170 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:01.419057  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:01.434491  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:01.605220  186170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:01.629171  186170 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993 for IP: 192.168.61.119
	I1028 12:16:01.629198  186170 certs.go:194] generating shared ca certs ...
	I1028 12:16:01.629223  186170 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:01.629411  186170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:01.629473  186170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:01.629486  186170 certs.go:256] generating profile certs ...
	I1028 12:16:01.629625  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key
	I1028 12:16:01.629692  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee
	I1028 12:16:01.629740  186170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key
	I1028 12:16:01.629886  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:01.629929  186170 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:01.629943  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:01.629984  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:01.630025  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:01.630060  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:01.630113  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:01.630911  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:01.673352  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:01.705371  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:01.731174  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:01.775555  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:16:01.809878  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:01.842241  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:01.876753  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:16:01.914897  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:01.945991  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:01.977763  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:02.010010  186170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:02.034184  186170 ssh_runner.go:195] Run: openssl version
	I1028 12:16:02.042784  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:02.055148  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060669  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060751  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.067345  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:02.079427  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:02.091613  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.096996  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.097061  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.103561  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:02.115762  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:02.128405  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133889  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133961  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.140274  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:02.155800  186170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:02.162343  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:02.170755  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:02.179332  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:02.187694  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:02.196183  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:02.204538  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:16:02.212604  186170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:02.212715  186170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:02.212796  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.260250  186170 cri.go:89] found id: ""
	I1028 12:16:02.260350  186170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:02.274246  186170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:02.274269  186170 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:02.274335  186170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:02.287972  186170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:02.288983  186170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:16:02.289661  186170 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-132631/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089993" cluster setting kubeconfig missing "old-k8s-version-089993" context setting]
	I1028 12:16:02.290778  186170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:02.292747  186170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:02.306303  186170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1028 12:16:02.306357  186170 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:02.306375  186170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:02.306438  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.348962  186170 cri.go:89] found id: ""
	I1028 12:16:02.349041  186170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:02.366483  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:02.377667  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:02.377690  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:02.377758  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:02.387857  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:02.387915  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:02.398137  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:02.408922  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:02.408992  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:02.419044  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.428952  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:02.429020  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.439488  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:02.450112  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:02.450174  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:02.461051  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:02.472059  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.607734  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.165378  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:04.555857  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:03.648337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648760  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:03.648736  187305 retry.go:31] will retry after 2.586516153s: waiting for machine to come up
	I1028 12:16:06.236934  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237402  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237433  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:06.237352  187305 retry.go:31] will retry after 3.507901898s: waiting for machine to come up
	I1028 12:16:03.452795  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.710145  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.811788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.903114  186170 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:03.903247  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.403775  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.904258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.403398  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.903353  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.403907  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.903762  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.403316  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.904259  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.557581  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.056276  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.746980  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747449  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747482  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:09.747401  187305 retry.go:31] will retry after 4.499585546s: waiting for machine to come up
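The retry lines above are libmachine polling libvirt until the VM's DHCP lease appears. A minimal by-hand sketch of the same check (not part of the run), assuming the network name and MAC address shown in the log and a host with the standard virsh CLI:

    NET=mk-default-k8s-diff-port-349222
    MAC=52:54:00:90:bc:cf
    # Poll the libvirt network's DHCP leases until the domain's MAC shows up.
    until virsh net-dhcp-leases "$NET" | grep -qi "$MAC"; do
      echo "no lease for $MAC yet, retrying in 3s"
      sleep 3
    done
    virsh net-dhcp-leases "$NET" | grep -i "$MAC"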
	I1028 12:16:08.403804  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:08.903726  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.404155  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.903968  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.403990  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.903742  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.403836  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.904088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.403293  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.903635  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.487114  185546 start.go:364] duration metric: took 56.6590668s to acquireMachinesLock for "no-preload-871884"
	I1028 12:16:15.487176  185546 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:16:15.487190  185546 fix.go:54] fixHost starting: 
	I1028 12:16:15.487650  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:16:15.487713  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:16:15.508857  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I1028 12:16:15.509318  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:16:15.510000  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:16:15.510037  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:16:15.510385  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:16:15.510599  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:15.510779  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:16:15.512738  185546 fix.go:112] recreateIfNeeded on no-preload-871884: state=Stopped err=<nil>
	I1028 12:16:15.512772  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	W1028 12:16:15.512963  185546 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:16:15.514890  185546 out.go:177] * Restarting existing kvm2 VM for "no-preload-871884" ...
	I1028 12:16:11.056427  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:13.058549  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.556621  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.516551  185546 main.go:141] libmachine: (no-preload-871884) Calling .Start
	I1028 12:16:15.516786  185546 main.go:141] libmachine: (no-preload-871884) Ensuring networks are active...
	I1028 12:16:15.517934  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network default is active
	I1028 12:16:15.518543  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network mk-no-preload-871884 is active
	I1028 12:16:15.519028  185546 main.go:141] libmachine: (no-preload-871884) Getting domain xml...
	I1028 12:16:15.519878  185546 main.go:141] libmachine: (no-preload-871884) Creating domain...
	I1028 12:16:14.249128  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249645  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has current primary IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249674  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Found IP for machine: 192.168.50.75
	I1028 12:16:14.249689  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserving static IP address...
	I1028 12:16:14.250120  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserved static IP address: 192.168.50.75
	I1028 12:16:14.250139  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for SSH to be available...
	I1028 12:16:14.250164  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.250205  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | skip adding static IP to network mk-default-k8s-diff-port-349222 - found existing host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"}
	I1028 12:16:14.250222  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Getting to WaitForSSH function...
	I1028 12:16:14.252540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.252883  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.252926  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.253035  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH client type: external
	I1028 12:16:14.253075  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa (-rw-------)
	I1028 12:16:14.253100  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:14.253115  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | About to run SSH command:
	I1028 12:16:14.253129  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | exit 0
	I1028 12:16:14.373688  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | SSH cmd err, output: <nil>: 
	I1028 12:16:14.374101  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetConfigRaw
	I1028 12:16:14.374713  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.377338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.377824  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.377857  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.378094  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:16:14.378326  186547 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:14.378345  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:14.378556  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.380694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.380976  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.380992  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.381143  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.381356  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381521  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381678  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.381882  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.382107  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.382119  186547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:14.490030  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:14.490061  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490303  186547 buildroot.go:166] provisioning hostname "default-k8s-diff-port-349222"
	I1028 12:16:14.490331  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490523  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.492989  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493395  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.493426  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493626  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.493794  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.493960  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.494104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.494258  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.494427  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.494439  186547 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-349222 && echo "default-k8s-diff-port-349222" | sudo tee /etc/hostname
	I1028 12:16:14.604373  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-349222
	
	I1028 12:16:14.604405  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.607135  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607437  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.607465  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.607891  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608060  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608187  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.608353  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.608549  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.608569  186547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-349222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-349222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-349222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:14.714933  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:14.714963  186547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:14.714990  186547 buildroot.go:174] setting up certificates
	I1028 12:16:14.714998  186547 provision.go:84] configureAuth start
	I1028 12:16:14.715007  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.715321  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.718051  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.718406  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718504  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.720638  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.720945  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.720972  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.721127  186547 provision.go:143] copyHostCerts
	I1028 12:16:14.721198  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:14.721213  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:14.721283  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:14.721407  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:14.721417  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:14.721446  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:14.721522  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:14.721544  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:14.721571  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:14.721634  186547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-349222 san=[127.0.0.1 192.168.50.75 default-k8s-diff-port-349222 localhost minikube]
	I1028 12:16:14.854227  186547 provision.go:177] copyRemoteCerts
	I1028 12:16:14.854285  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:14.854314  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.857250  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857590  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.857620  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857897  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.858091  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.858286  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.858434  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:14.940752  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:14.967575  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 12:16:14.992693  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:16:15.017801  186547 provision.go:87] duration metric: took 302.790563ms to configureAuth
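The three scp calls above install the CA plus the freshly generated server certificate and key on the node; the SANs requested a few lines earlier (127.0.0.1, 192.168.50.75, the profile name, localhost, minikube) can be checked by hand. A small sketch (not from the run), assuming OpenSSL 1.1.1+ is available on the node:

    # Inspect the server certificate that was copied to /etc/docker/server.pem.
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
    # Confirm it chains to the CA that was copied alongside it.
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem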
	I1028 12:16:15.017831  186547 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:15.018073  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:15.018168  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.021181  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.021574  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021719  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.021894  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022113  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022317  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.022564  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.022744  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.022761  186547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:15.257308  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:15.257339  186547 machine.go:96] duration metric: took 878.998573ms to provisionDockerMachine
	I1028 12:16:15.257350  186547 start.go:293] postStartSetup for "default-k8s-diff-port-349222" (driver="kvm2")
	I1028 12:16:15.257360  186547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:15.257378  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.257695  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:15.257721  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.260380  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260767  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.260795  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260990  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.261186  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.261370  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.261513  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.341376  186547 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:15.345736  186547 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:15.345760  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:15.345820  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:15.345891  186547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:15.345978  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:15.355662  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:15.381750  186547 start.go:296] duration metric: took 124.385481ms for postStartSetup
	I1028 12:16:15.381788  186547 fix.go:56] duration metric: took 22.00329785s for fixHost
	I1028 12:16:15.381807  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.384756  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385099  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.385130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385359  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.385587  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385782  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385974  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.386165  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.386345  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.386355  186547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:15.486905  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117775.445749296
	
	I1028 12:16:15.486934  186547 fix.go:216] guest clock: 1730117775.445749296
	I1028 12:16:15.486944  186547 fix.go:229] Guest: 2024-10-28 12:16:15.445749296 +0000 UTC Remote: 2024-10-28 12:16:15.381791731 +0000 UTC m=+192.967058764 (delta=63.957565ms)
	I1028 12:16:15.487005  186547 fix.go:200] guest clock delta is within tolerance: 63.957565ms
	I1028 12:16:15.487018  186547 start.go:83] releasing machines lock for "default-k8s-diff-port-349222", held for 22.108560462s
	I1028 12:16:15.487082  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.487382  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:15.490840  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491343  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.491374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491528  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492208  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492431  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492581  186547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:15.492657  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.492706  186547 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:15.492746  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.496062  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496119  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496544  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496901  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497225  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497257  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497458  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497583  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497665  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.497798  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497977  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.590741  186547 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:15.615347  186547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:15.762979  186547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:15.770132  186547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:15.770221  186547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:15.788651  186547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:15.788684  186547 start.go:495] detecting cgroup driver to use...
	I1028 12:16:15.788751  186547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:15.806118  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:15.820916  186547 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:15.820986  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:15.835770  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:15.850994  186547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:15.979465  186547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:16.160837  186547 docker.go:233] disabling docker service ...
	I1028 12:16:16.160924  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:16.177934  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:16.194616  186547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:16.320605  186547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:16.464175  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:16.479626  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:16.502747  186547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:16.502889  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.514636  186547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:16.514695  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.528137  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.539961  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.552263  186547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:16.566275  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.578632  186547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.599084  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.611250  186547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:16.621976  186547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:16.622052  186547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:16.640800  186547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:16:16.651767  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:16.806628  186547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:16.903584  186547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:16.903655  186547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:16.909873  186547 start.go:563] Will wait 60s for crictl version
	I1028 12:16:16.909950  186547 ssh_runner.go:195] Run: which crictl
	I1028 12:16:16.915388  186547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:16.964424  186547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:16.964517  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:16.997415  186547 ssh_runner.go:195] Run: crio --version
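With /etc/crictl.yaml pointing at unix:///var/run/crio/crio.sock (written a few lines above) and CRI-O restarted, the runtime can also be inspected by hand; a small sketch (not part of the run) using standard crictl subcommands:

    sudo crictl version        # RuntimeName/RuntimeVersion, as echoed in the log
    sudo crictl info           # runtime conditions and CNI config status
    sudo crictl images         # images currently known to CRI-O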
	I1028 12:16:17.032323  186547 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:17.033747  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:17.036500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.036903  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:17.036935  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.037118  186547 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:17.041698  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:17.056649  186547 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:17.056792  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:17.056840  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:17.099143  186547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:17.099233  186547 ssh_runner.go:195] Run: which lz4
	I1028 12:16:17.103882  186547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:16:17.108660  186547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:16:17.108699  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
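No preload tarball is found on this node, so the cached tarball is copied over and, a few lines below, unpacked into /var. A by-hand sketch of the same sequence (not from the run), with the paths and node address taken from the log and SSH key handling elided:

    TARBALL=/home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
    NODE=docker@192.168.50.75
    scp "$TARBALL" "$NODE":/tmp/preloaded.tar.lz4
    ssh "$NODE" 'sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4'
    ssh "$NODE" 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4'
    ssh "$NODE" 'sudo crictl images --output json' | head -c 300   # preloaded images should now be visible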
	I1028 12:16:13.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:13.903443  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.404017  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.903385  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.403903  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.904106  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.403713  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.903397  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.404299  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.903855  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.559178  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:19.560739  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:16.842086  185546 main.go:141] libmachine: (no-preload-871884) Waiting to get IP...
	I1028 12:16:16.843056  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:16.843514  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:16.843599  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:16.843484  187500 retry.go:31] will retry after 240.188984ms: waiting for machine to come up
	I1028 12:16:17.085193  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.085702  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.085739  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.085649  187500 retry.go:31] will retry after 361.44193ms: waiting for machine to come up
	I1028 12:16:17.448961  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.449619  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.449645  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.449576  187500 retry.go:31] will retry after 386.179326ms: waiting for machine to come up
	I1028 12:16:17.837097  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.837879  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.837907  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.837834  187500 retry.go:31] will retry after 531.12665ms: waiting for machine to come up
	I1028 12:16:18.370266  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:18.370803  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:18.370834  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:18.370746  187500 retry.go:31] will retry after 760.20134ms: waiting for machine to come up
	I1028 12:16:19.132853  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.133415  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.133444  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.133360  187500 retry.go:31] will retry after 817.773678ms: waiting for machine to come up
	I1028 12:16:19.952317  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.952800  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.952824  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.952760  187500 retry.go:31] will retry after 861.798605ms: waiting for machine to come up
	I1028 12:16:20.816156  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:20.816794  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:20.816826  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:20.816750  187500 retry.go:31] will retry after 908.062214ms: waiting for machine to come up
	I1028 12:16:18.686980  186547 crio.go:462] duration metric: took 1.583134893s to copy over tarball
	I1028 12:16:18.687053  186547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:16:21.016264  186547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.329174428s)
	I1028 12:16:21.016309  186547 crio.go:469] duration metric: took 2.329300291s to extract the tarball
	I1028 12:16:21.016322  186547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:16:21.053950  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:21.112876  186547 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:16:21.112903  186547 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:16:21.112914  186547 kubeadm.go:934] updating node { 192.168.50.75 8444 v1.31.2 crio true true} ...
	I1028 12:16:21.113037  186547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-349222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:21.113119  186547 ssh_runner.go:195] Run: crio config
	I1028 12:16:21.179853  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:21.179877  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:21.179888  186547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:21.179907  186547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.75 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-349222 NodeName:default-k8s-diff-port-349222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:21.180039  186547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.75
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-349222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.75"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.75"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:21.180117  186547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:21.191650  186547 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:21.191721  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:21.201670  186547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1028 12:16:21.220426  186547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:21.240774  186547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1028 12:16:21.263336  186547 ssh_runner.go:195] Run: grep 192.168.50.75	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:21.267818  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:21.281577  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:21.441517  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:21.464117  186547 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222 for IP: 192.168.50.75
	I1028 12:16:21.464145  186547 certs.go:194] generating shared ca certs ...
	I1028 12:16:21.464167  186547 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:21.464392  186547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:21.464458  186547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:21.464485  186547 certs.go:256] generating profile certs ...
	I1028 12:16:21.464599  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/client.key
	I1028 12:16:21.464691  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key.e54e33e0
	I1028 12:16:21.464749  186547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key
	I1028 12:16:21.464919  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:21.464967  186547 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:21.464981  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:21.465006  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:21.465031  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:21.465069  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:21.465124  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:21.465976  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:21.511145  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:21.572071  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:21.613442  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:21.655508  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 12:16:21.687378  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:16:21.713227  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:21.738909  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:21.765274  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:21.792427  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:21.817632  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:21.842996  186547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:21.861059  186547 ssh_runner.go:195] Run: openssl version
	I1028 12:16:21.867814  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:21.880769  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886245  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886325  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.893179  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:21.908974  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:21.926992  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932350  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932428  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.939073  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:21.952302  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:21.965485  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971486  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971564  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.978531  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:21.995399  186547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:22.001453  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:22.009449  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:22.016898  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:22.024410  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:22.033151  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:22.040981  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:16:22.048298  186547 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:22.048441  186547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:22.048531  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.095210  186547 cri.go:89] found id: ""
	I1028 12:16:22.095319  186547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:22.111740  186547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:22.111772  186547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:22.111828  186547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:22.122472  186547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:22.123648  186547 kubeconfig.go:125] found "default-k8s-diff-port-349222" server: "https://192.168.50.75:8444"
	I1028 12:16:22.126117  186547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:22.137057  186547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.75
	I1028 12:16:22.137096  186547 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:22.137108  186547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:22.137179  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.180526  186547 cri.go:89] found id: ""
	I1028 12:16:22.180638  186547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:22.197697  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:22.208176  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:22.208197  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:22.208246  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:16:22.218379  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:22.218438  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:22.228844  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:16:22.239330  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:22.239407  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:22.250200  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.260309  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:22.260374  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.271041  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:16:22.281556  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:22.281637  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:22.294003  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:22.305123  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:22.426791  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:18.403494  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:18.903364  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.403869  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.904257  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.404252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.904028  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.404218  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.903631  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.403882  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.904188  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.058068  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:24.059822  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:21.726767  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:21.727332  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:21.727373  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:21.727224  187500 retry.go:31] will retry after 1.684184533s: waiting for machine to come up
	I1028 12:16:23.412691  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:23.413228  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:23.413254  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:23.413177  187500 retry.go:31] will retry after 1.416062445s: waiting for machine to come up
	I1028 12:16:24.830846  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:24.831450  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:24.831480  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:24.831393  187500 retry.go:31] will retry after 2.716897952s: waiting for machine to come up
	I1028 12:16:23.288371  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.506229  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.575063  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.644776  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:23.644896  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.145579  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.645050  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.666456  186547 api_server.go:72] duration metric: took 1.021679294s to wait for apiserver process to appear ...
	I1028 12:16:24.666493  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:24.666518  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:24.667086  186547 api_server.go:269] stopped: https://192.168.50.75:8444/healthz: Get "https://192.168.50.75:8444/healthz": dial tcp 192.168.50.75:8444: connect: connection refused
	I1028 12:16:25.166765  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:23.404152  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:23.904225  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.403333  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.904323  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.404279  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.904317  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.404253  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.904083  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.403621  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.903752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.336957  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.337000  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.337015  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.382075  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.382110  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.667083  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.671910  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:28.671935  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.167591  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.173364  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:29.173397  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.666902  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.672205  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:16:29.679964  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:16:29.680002  186547 api_server.go:131] duration metric: took 5.013500479s to wait for apiserver health ...
	I1028 12:16:29.680014  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:29.680032  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:29.681992  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:16:26.558629  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.560116  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:27.550893  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:27.551454  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:27.551476  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:27.551438  187500 retry.go:31] will retry after 2.986712877s: waiting for machine to come up
	I1028 12:16:30.539999  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:30.540601  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:30.540632  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:30.540526  187500 retry.go:31] will retry after 3.947007446s: waiting for machine to come up
	I1028 12:16:29.683325  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:16:29.697362  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:16:29.717296  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:16:29.726327  186547 system_pods.go:59] 8 kube-system pods found
	I1028 12:16:29.726363  186547 system_pods.go:61] "coredns-7c65d6cfc9-k5h7n" [e203fcce-1a8a-431b-a816-d75b33ca9417] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:16:29.726374  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [2214daee-0302-44cd-9297-836eeb011232] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:16:29.726391  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [c4331c24-07e2-4b50-ab04-31bcd00960e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:16:29.726402  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [9dddd9fb-ad03-4771-af1b-d9e1e024af52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:16:29.726413  186547 system_pods.go:61] "kube-proxy-bqq65" [ed5d0c14-0ddb-4446-a2f7-ae457d629fb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 12:16:29.726423  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [9cfcc366-038f-43a9-b919-48742fa419af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:16:29.726434  186547 system_pods.go:61] "metrics-server-6867b74b74-cgkz9" [3d919412-efb8-4030-a5d0-3c325c824c48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:16:29.726445  186547 system_pods.go:61] "storage-provisioner" [613b003c-1eee-4294-947f-ea7a21edc8d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:16:29.726464  186547 system_pods.go:74] duration metric: took 9.135782ms to wait for pod list to return data ...
	I1028 12:16:29.726478  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:16:29.729971  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:16:29.729996  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:16:29.730009  186547 node_conditions.go:105] duration metric: took 3.525858ms to run NodePressure ...
	I1028 12:16:29.730035  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:30.043775  186547 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048614  186547 kubeadm.go:739] kubelet initialised
	I1028 12:16:30.048638  186547 kubeadm.go:740] duration metric: took 4.83853ms waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048647  186547 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:16:30.053908  186547 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:32.063283  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.404110  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.904058  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.404042  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.903819  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.404114  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.904140  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.404241  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.903586  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.403858  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.903566  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.057577  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:33.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:35.557338  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:34.491658  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492175  185546 main.go:141] libmachine: (no-preload-871884) Found IP for machine: 192.168.72.156
	I1028 12:16:34.492197  185546 main.go:141] libmachine: (no-preload-871884) Reserving static IP address...
	I1028 12:16:34.492215  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has current primary IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492674  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.492704  185546 main.go:141] libmachine: (no-preload-871884) Reserved static IP address: 192.168.72.156
	I1028 12:16:34.492739  185546 main.go:141] libmachine: (no-preload-871884) DBG | skip adding static IP to network mk-no-preload-871884 - found existing host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"}
	I1028 12:16:34.492763  185546 main.go:141] libmachine: (no-preload-871884) DBG | Getting to WaitForSSH function...
	I1028 12:16:34.492777  185546 main.go:141] libmachine: (no-preload-871884) Waiting for SSH to be available...
	I1028 12:16:34.495176  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495502  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.495536  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495682  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH client type: external
	I1028 12:16:34.495714  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa (-rw-------)
	I1028 12:16:34.495747  185546 main.go:141] libmachine: (no-preload-871884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:34.495770  185546 main.go:141] libmachine: (no-preload-871884) DBG | About to run SSH command:
	I1028 12:16:34.495796  185546 main.go:141] libmachine: (no-preload-871884) DBG | exit 0
	I1028 12:16:34.625650  185546 main.go:141] libmachine: (no-preload-871884) DBG | SSH cmd err, output: <nil>: 
	I1028 12:16:34.625959  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetConfigRaw
	I1028 12:16:34.626602  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.629137  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629442  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.629477  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629733  185546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/config.json ...
	I1028 12:16:34.629938  185546 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:34.629957  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:34.630153  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.632415  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.632777  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.632804  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.633033  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.633247  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633422  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633592  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.633762  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.633954  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.633968  185546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:34.738368  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:34.738406  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738696  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:16:34.738729  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738926  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.741750  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742216  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.742322  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742339  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.742538  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742689  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742857  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.743032  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.743248  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.743266  185546 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-871884 && echo "no-preload-871884" | sudo tee /etc/hostname
	I1028 12:16:34.863767  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-871884
	
	I1028 12:16:34.863802  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.867136  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867530  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.867561  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867822  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.868039  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868251  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868430  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.868634  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.868880  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.868905  185546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-871884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-871884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-871884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:34.989420  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:34.989450  185546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:34.989468  185546 buildroot.go:174] setting up certificates
	I1028 12:16:34.989476  185546 provision.go:84] configureAuth start
	I1028 12:16:34.989485  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.989790  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.992627  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.992977  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.993007  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.993225  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.995586  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.995888  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.995911  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.996122  185546 provision.go:143] copyHostCerts
	I1028 12:16:34.996190  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:34.996204  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:34.996261  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:34.996375  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:34.996384  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:34.996408  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:34.996472  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:34.996482  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:34.996499  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:34.996559  185546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.no-preload-871884 san=[127.0.0.1 192.168.72.156 localhost minikube no-preload-871884]
	I1028 12:16:35.437900  185546 provision.go:177] copyRemoteCerts
	I1028 12:16:35.437961  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:35.437985  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.440936  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441329  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.441361  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441555  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.441756  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.441921  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.442085  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.524911  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:35.554631  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 12:16:35.586946  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:16:35.620121  185546 provision.go:87] duration metric: took 630.630531ms to configureAuth
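Note: the block above regenerates the machine-level TLS material and copies it to /etc/docker on the guest (ca.pem, server.pem, server-key.pem); the server cert is issued with SANs for 127.0.0.1, 192.168.72.156, localhost, minikube and no-preload-871884. A quick manual check of that result (illustrative only, not part of the test run; run inside the guest, e.g. via minikube ssh -p no-preload-871884):

  sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
  sudo openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem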
	I1028 12:16:35.620155  185546 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:35.620395  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:35.620502  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.623316  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623607  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.623643  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623886  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.624099  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624290  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624433  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.624612  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:35.624794  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:35.624810  185546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:35.886145  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:35.886178  185546 machine.go:96] duration metric: took 1.256224912s to provisionDockerMachine
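Note: the SSH command at 12:16:35.624 writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and restarts crio; on the minikube guest image that file is presumably consumed as an environment file by the crio systemd unit (an assumption about the ISO's unit layout, not shown in this log). It could be confirmed inside the guest with, for example:

  systemctl cat crio | grep -i environment
  systemctl show -p Environment crio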
	I1028 12:16:35.886196  185546 start.go:293] postStartSetup for "no-preload-871884" (driver="kvm2")
	I1028 12:16:35.886209  185546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:35.886232  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:35.886615  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:35.886653  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.889615  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890016  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.890048  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.890459  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.890654  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.890798  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.977889  185546 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:35.983360  185546 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:35.983387  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:35.983454  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:35.983543  185546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:35.983674  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:35.997400  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:36.025665  185546 start.go:296] duration metric: took 139.454088ms for postStartSetup
	I1028 12:16:36.025714  185546 fix.go:56] duration metric: took 20.538525254s for fixHost
	I1028 12:16:36.025739  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.028490  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.028933  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.028964  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.029170  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.029386  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029573  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029734  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.029909  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:36.030087  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:36.030098  185546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:36.138559  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117796.101397993
	
	I1028 12:16:36.138589  185546 fix.go:216] guest clock: 1730117796.101397993
	I1028 12:16:36.138599  185546 fix.go:229] Guest: 2024-10-28 12:16:36.101397993 +0000 UTC Remote: 2024-10-28 12:16:36.025719388 +0000 UTC m=+359.787107454 (delta=75.678605ms)
	I1028 12:16:36.138633  185546 fix.go:200] guest clock delta is within tolerance: 75.678605ms
	I1028 12:16:36.138638  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 20.651488254s
	I1028 12:16:36.138663  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.138953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:36.141711  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142144  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.142180  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142323  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.142975  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143165  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143240  185546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:36.143306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.143378  185546 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:36.143399  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.145980  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146166  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146348  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146375  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146507  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146617  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146657  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146701  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.146795  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146882  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.146953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.147013  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.147071  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.147202  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.223364  185546 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:36.246964  185546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:34.561016  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.564296  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.396734  185546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:36.403214  185546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:36.403298  185546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:36.421658  185546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:36.421695  185546 start.go:495] detecting cgroup driver to use...
	I1028 12:16:36.421772  185546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:36.441133  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:36.456750  185546 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:36.456806  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:36.473457  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:36.489210  185546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:36.621054  185546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:36.767341  185546 docker.go:233] disabling docker service ...
	I1028 12:16:36.767432  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:36.784655  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:36.799522  185546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:36.942312  185546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:37.066636  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:37.082284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:37.102462  185546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:37.102530  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.113687  185546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:37.113760  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.125624  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.137036  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.148417  185546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:37.160015  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.171382  185546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.192342  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
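Note: the sequence above first points crictl at the CRI-O socket (/etc/crictl.yaml) and then rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed. Reconstructed from those commands (not a capture of the actual file), the drop-in should afterwards contain settings equivalent to:

  pause_image = "registry.k8s.io/pause:3.10"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]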
	I1028 12:16:37.204353  185546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:37.215188  185546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:37.215275  185546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:37.230653  185546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
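Note: the netfilter probe above exits with status 255 because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist; loading the module makes the sysctl readable, and IPv4 forwarding is then enabled for pod traffic. The equivalent manual steps (illustrative):

  sudo modprobe br_netfilter
  sudo sysctl net.bridge.bridge-nf-call-iptables
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'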
	I1028 12:16:37.241484  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:37.382996  185546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:37.479263  185546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:37.479363  185546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:37.485265  185546 start.go:563] Will wait 60s for crictl version
	I1028 12:16:37.485330  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:37.489545  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:37.536126  185546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:37.536212  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.567538  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.600370  185546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:33.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:33.903341  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.403703  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.903445  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.404040  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.904246  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.403798  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.903950  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.403912  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.903423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.559329  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:40.057624  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:37.601686  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:37.604235  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604568  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:37.604601  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604782  185546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:37.609354  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:37.624966  185546 kubeadm.go:883] updating cluster {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:37.625081  185546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:37.625117  185546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:37.664112  185546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:37.664149  185546 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:16:37.664262  185546 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.664306  185546 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.664334  185546 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 12:16:37.664311  185546 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.664352  185546 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.664393  185546 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.664434  185546 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.664399  185546 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666080  185546 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.666083  185546 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.666081  185546 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.666142  185546 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.666085  185546 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 12:16:37.666079  185546 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.666185  185546 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666398  185546 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.840639  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.857089  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.859107  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 12:16:37.859358  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.863640  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.867925  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.876221  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.921581  185546 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 12:16:37.921638  185546 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.921689  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.042970  185546 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 12:16:38.043015  185546 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.043068  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093917  185546 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 12:16:38.093954  185546 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 12:16:38.093973  185546 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.093985  185546 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.094029  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094038  185546 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 12:16:38.094057  185546 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.094087  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.094094  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094030  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093976  185546 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 12:16:38.094143  185546 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.094152  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.094175  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.110134  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.110302  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.188922  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.188979  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.193920  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.193929  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.292698  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.325562  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.331855  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.332873  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.345880  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.345951  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.414842  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.470776  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.470949  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 12:16:38.471044  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.481197  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 12:16:38.481333  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:38.503147  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 12:16:38.503171  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:38.532884  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 12:16:38.533001  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:38.552405  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 12:16:38.552417  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 12:16:38.552472  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552485  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 12:16:38.552523  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:38.552529  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552552  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 12:16:38.552527  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 12:16:38.552598  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 12:16:38.829851  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127678  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.575124569s)
	I1028 12:16:41.127722  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 12:16:41.127744  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.575188461s)
	I1028 12:16:41.127775  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 12:16:41.127785  185546 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.297902587s)
	I1028 12:16:41.127803  185546 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127818  185546 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 12:16:41.127850  185546 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127858  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127895  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:39.064564  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:41.563643  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:38.403644  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:38.904220  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.404068  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.904158  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.403660  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.903678  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.404061  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.903568  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.404297  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.904036  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.058025  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:44.557594  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.190694  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062807881s)
	I1028 12:16:43.190736  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 12:16:43.190752  185546 ssh_runner.go:235] Completed: which crictl: (2.062836368s)
	I1028 12:16:43.190773  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:43.190827  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:43.190831  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:45.281583  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.090685426s)
	I1028 12:16:45.281620  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 12:16:45.281650  185546 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281679  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.090821035s)
	I1028 12:16:45.281698  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281750  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:45.325500  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:42.565395  186547 pod_ready.go:93] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.565425  186547 pod_ready.go:82] duration metric: took 12.511487215s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.565438  186547 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572364  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.572388  186547 pod_ready.go:82] duration metric: took 6.941356ms for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572402  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579074  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.579099  186547 pod_ready.go:82] duration metric: took 6.689137ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579116  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584088  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.584108  186547 pod_ready.go:82] duration metric: took 4.985095ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584118  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588810  186547 pod_ready.go:93] pod "kube-proxy-bqq65" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.588837  186547 pod_ready.go:82] duration metric: took 4.711896ms for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588849  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758349  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:43.758376  186547 pod_ready.go:82] duration metric: took 1.169519383s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758387  186547 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:45.766209  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.404022  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:43.903570  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.403673  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.903585  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.403476  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.904069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.403906  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.904264  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.903991  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.059150  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.556589  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.174287  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.84875195s)
	I1028 12:16:49.174340  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 12:16:49.174291  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.892568087s)
	I1028 12:16:49.174422  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 12:16:49.174427  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:49.174466  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:49.174524  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:48.265641  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:50.271513  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:48.404207  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:48.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.404088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.903614  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.403587  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.904256  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.404314  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.903794  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.404122  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.903312  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.557320  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.557540  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:51.438821  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.26426785s)
	I1028 12:16:51.438857  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 12:16:51.438890  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.264449757s)
	I1028 12:16:51.438893  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:51.438911  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 12:16:51.438945  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:52.890902  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451935078s)
	I1028 12:16:52.890933  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 12:16:52.890960  185546 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:52.891010  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:53.643145  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 12:16:53.643208  185546 cache_images.go:123] Successfully loaded all cached images
	I1028 12:16:53.643216  185546 cache_images.go:92] duration metric: took 15.979050279s to LoadCachedImages
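Note: because no preload tarball exists for this profile ("assuming images are not preloaded" above), each required image is transferred from the host cache under .minikube/cache/images and imported into container storage (shared by CRI-O and podman on this guest image) after the stale reference is removed with crictl. A simplified per-image sketch of the pattern visible in this log (guest-side, paths as logged):

  stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0      # skip the copy if the tarball is already on the guest
  sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0  # import the image into container storage
  sudo crictl images | grep etcd                              # confirm the runtime now sees the image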
	I1028 12:16:53.643231  185546 kubeadm.go:934] updating node { 192.168.72.156 8443 v1.31.2 crio true true} ...
	I1028 12:16:53.643393  185546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-871884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:53.643480  185546 ssh_runner.go:195] Run: crio config
	I1028 12:16:53.701778  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:16:53.701805  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:53.701814  185546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:53.701836  185546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.156 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-871884 NodeName:no-preload-871884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:53.701952  185546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-871884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.156"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.156"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:53.702019  185546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:53.714245  185546 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:53.714327  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:53.725610  185546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 12:16:53.745071  185546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:53.766897  185546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
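Note: the generated kubeadm config shown at 12:16:53.701952 is written to /var/tmp/minikube/kubeadm.yaml.new on the guest. If one wanted to sanity-check it by hand (not something the test itself does), recent kubeadm releases can validate a config file directly, e.g.:

  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new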
	I1028 12:16:53.787043  185546 ssh_runner.go:195] Run: grep 192.168.72.156	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:53.791580  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
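Note: the bash one-liner above (and the matching one at 12:16:37.609 for host.minikube.internal) updates /etc/hosts idempotently: any existing entry for the name is stripped and a fresh line is appended, leaving entries like:

  192.168.72.1	host.minikube.internal
  192.168.72.156	control-plane.minikube.internal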
	I1028 12:16:53.805088  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:53.945235  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:53.964073  185546 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884 for IP: 192.168.72.156
	I1028 12:16:53.964099  185546 certs.go:194] generating shared ca certs ...
	I1028 12:16:53.964115  185546 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:53.964290  185546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:53.964338  185546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:53.964355  185546 certs.go:256] generating profile certs ...
	I1028 12:16:53.964458  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.key
	I1028 12:16:53.964533  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key.6934b48e
	I1028 12:16:53.964584  185546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key
	I1028 12:16:53.964719  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:53.964750  185546 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:53.964765  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:53.964801  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:53.964831  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:53.964866  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:53.964921  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:53.965632  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:54.004592  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:54.044270  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:54.079496  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:54.114473  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:16:54.141836  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:54.175201  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:54.202282  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:54.227874  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:54.254818  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:54.282950  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:54.310204  185546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:54.328834  185546 ssh_runner.go:195] Run: openssl version
	I1028 12:16:54.335391  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:54.347474  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352687  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352755  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.358834  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:54.373155  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:54.387035  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392179  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392281  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.398488  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:54.412352  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:54.426361  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431415  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431470  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.437583  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:54.450708  185546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:54.456625  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:54.463458  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:54.469939  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:54.477873  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:54.484962  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:54.491679  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1028 12:16:54.498106  185546 kubeadm.go:392] StartCluster: {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:54.498211  185546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:54.498287  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.543142  185546 cri.go:89] found id: ""
	I1028 12:16:54.543250  185546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:54.555948  185546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:54.555971  185546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:54.556021  185546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:54.566954  185546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:54.567990  185546 kubeconfig.go:125] found "no-preload-871884" server: "https://192.168.72.156:8443"
	I1028 12:16:54.570149  185546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:54.581005  185546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.156
	I1028 12:16:54.581039  185546 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:54.581051  185546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:54.581100  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.622676  185546 cri.go:89] found id: ""
	I1028 12:16:54.622742  185546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:54.642427  185546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:54.655104  185546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:54.655131  185546 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:54.655199  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:54.665367  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:54.665432  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:54.675664  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:54.685921  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:54.685997  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:54.698451  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.709982  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:54.710060  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.721243  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:54.731699  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:54.731780  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:54.743365  185546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:54.754284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:54.868055  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.645470  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.858805  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.940632  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:56.020654  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:56.020735  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.764963  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:54.766822  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.768500  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.403716  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:53.903325  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.404326  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.903529  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.403679  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.903480  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.403429  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.904252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.403496  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.058614  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.556085  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:00.556460  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.521589  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.021710  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.066266  185546 api_server.go:72] duration metric: took 1.045608096s to wait for apiserver process to appear ...
	I1028 12:16:57.066305  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:57.066326  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:16:57.066862  185546 api_server.go:269] stopped: https://192.168.72.156:8443/healthz: Get "https://192.168.72.156:8443/healthz": dial tcp 192.168.72.156:8443: connect: connection refused
	I1028 12:16:57.567124  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.159147  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.159179  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.159193  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.171505  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.171530  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.566560  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.570920  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:00.570947  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.066537  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.071173  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.071205  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.566517  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.577822  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.577851  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:02.066514  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:02.071117  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:17:02.078265  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:17:02.078293  185546 api_server.go:131] duration metric: took 5.011981306s to wait for apiserver health ...
	I1028 12:17:02.078302  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:17:02.078308  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:17:02.080348  185546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:16:59.267565  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:01.766399  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.404020  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:58.903743  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.403548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.903515  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.403423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.903757  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.403620  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.903710  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.403932  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.903729  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.081626  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:17:02.103809  185546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:17:02.135225  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:17:02.152051  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:17:02.152102  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:17:02.152113  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:17:02.152125  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:17:02.152133  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:17:02.152146  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:17:02.152159  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:17:02.152167  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:17:02.152174  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:17:02.152183  185546 system_pods.go:74] duration metric: took 16.930389ms to wait for pod list to return data ...
	I1028 12:17:02.152192  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:17:02.157475  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:17:02.157504  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:17:02.157515  185546 node_conditions.go:105] duration metric: took 5.317861ms to run NodePressure ...
	I1028 12:17:02.157548  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:17:02.476553  185546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482764  185546 kubeadm.go:739] kubelet initialised
	I1028 12:17:02.482789  185546 kubeadm.go:740] duration metric: took 6.205425ms waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482798  185546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:02.487480  185546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.495454  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495482  185546 pod_ready.go:82] duration metric: took 7.976331ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.495495  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495505  185546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.499904  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499931  185546 pod_ready.go:82] duration metric: took 4.41555ms for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.499941  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499948  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.504272  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504300  185546 pod_ready.go:82] duration metric: took 4.345522ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.504325  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504337  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.538786  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538826  185546 pod_ready.go:82] duration metric: took 34.474629ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.538841  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538851  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.939462  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939490  185546 pod_ready.go:82] duration metric: took 400.627739ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.939502  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939511  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.339338  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339369  185546 pod_ready.go:82] duration metric: took 399.848996ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.339384  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339394  185546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.739585  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739640  185546 pod_ready.go:82] duration metric: took 400.235271ms for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.739655  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739665  185546 pod_ready.go:39] duration metric: took 1.256859696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:03.739682  185546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:17:03.755064  185546 ops.go:34] apiserver oom_adj: -16
	I1028 12:17:03.755086  185546 kubeadm.go:597] duration metric: took 9.199108841s to restartPrimaryControlPlane
	I1028 12:17:03.755096  185546 kubeadm.go:394] duration metric: took 9.256999682s to StartCluster
	I1028 12:17:03.755111  185546 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.755175  185546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:17:03.757048  185546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.757327  185546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:17:03.757425  185546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:17:03.757535  185546 addons.go:69] Setting storage-provisioner=true in profile "no-preload-871884"
	I1028 12:17:03.757563  185546 addons.go:234] Setting addon storage-provisioner=true in "no-preload-871884"
	I1028 12:17:03.757565  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:17:03.757589  185546 addons.go:69] Setting metrics-server=true in profile "no-preload-871884"
	I1028 12:17:03.757617  185546 addons.go:234] Setting addon metrics-server=true in "no-preload-871884"
	I1028 12:17:03.757568  185546 addons.go:69] Setting default-storageclass=true in profile "no-preload-871884"
	W1028 12:17:03.757626  185546 addons.go:243] addon metrics-server should already be in state true
	I1028 12:17:03.757635  185546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-871884"
	W1028 12:17:03.757573  185546 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:17:03.757669  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.757713  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.758051  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758093  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758196  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758233  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758231  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758355  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.759378  185546 out.go:177] * Verifying Kubernetes components...
	I1028 12:17:03.761108  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:17:03.786180  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I1028 12:17:03.786344  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I1028 12:17:03.787005  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787096  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.787658  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.788034  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.789126  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.789149  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.789333  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.789366  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.790199  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.790591  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.793866  185546 addons.go:234] Setting addon default-storageclass=true in "no-preload-871884"
	W1028 12:17:03.793890  185546 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:17:03.793920  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.794332  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.794384  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.806461  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I1028 12:17:03.806960  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.807572  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1028 12:17:03.807644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.807835  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808074  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.808188  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.808349  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.808603  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.808624  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808993  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.809610  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.809665  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.810531  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.812676  185546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:17:03.813307  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I1028 12:17:03.813821  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.814228  185546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:03.814248  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:17:03.814266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.814350  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.814373  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.814848  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.815284  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.815323  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.817336  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817751  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.817776  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817889  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.818079  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.818219  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.818357  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.830425  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1028 12:17:03.830940  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.831486  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.831507  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.831905  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.832125  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.834275  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.835260  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I1028 12:17:03.835687  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.836180  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.836200  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.836527  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.836604  185546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:17:03.836741  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.838273  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:17:03.838290  185546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:17:03.838306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.838508  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.839044  185546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:03.839060  185546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:17:03.839080  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.842836  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843272  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.843291  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843461  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.843598  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.843767  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.843774  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843909  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.844312  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.844330  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.845228  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.845354  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.845474  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.845623  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.981979  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:17:04.003932  185546 node_ready.go:35] waiting up to 6m0s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:04.071389  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:04.169654  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:04.186781  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:17:04.186808  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:17:04.252889  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:17:04.252921  185546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:17:04.315140  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.315166  185546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:17:04.395995  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.489084  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489122  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489426  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.489445  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489470  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.489481  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489490  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489763  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489781  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.497272  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.497297  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.497647  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.497677  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.497702  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185405  185546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.015712456s)
	I1028 12:17:05.185458  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185469  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.185749  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.185768  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185778  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185786  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.186142  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.186160  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.186149  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.294924  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.294953  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295282  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295301  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295319  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295329  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.295339  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295584  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295615  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295622  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295641  185546 addons.go:475] Verifying addon metrics-server=true in "no-preload-871884"
	I1028 12:17:05.297689  185546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1028 12:17:02.557465  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:04.557517  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:05.298945  185546 addons.go:510] duration metric: took 1.541528913s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1028 12:17:06.008731  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.766439  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:06.267839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:03.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:03.904015  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:03.904157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:03.952859  186170 cri.go:89] found id: ""
	I1028 12:17:03.952891  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.952903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:03.952911  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:03.952972  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:03.991366  186170 cri.go:89] found id: ""
	I1028 12:17:03.991395  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.991406  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:03.991414  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:03.991472  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:04.030462  186170 cri.go:89] found id: ""
	I1028 12:17:04.030494  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.030505  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:04.030513  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:04.030577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:04.066765  186170 cri.go:89] found id: ""
	I1028 12:17:04.066797  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.066808  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:04.066829  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:04.066890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:04.113262  186170 cri.go:89] found id: ""
	I1028 12:17:04.113291  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.113321  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:04.113329  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:04.113397  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:04.162767  186170 cri.go:89] found id: ""
	I1028 12:17:04.162804  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.162816  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:04.162832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:04.162906  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:04.209735  186170 cri.go:89] found id: ""
	I1028 12:17:04.209768  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.209780  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:04.209788  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:04.209853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:04.258945  186170 cri.go:89] found id: ""
	I1028 12:17:04.258981  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.258993  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:04.259004  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:04.259031  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:04.314152  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:04.314191  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:04.330109  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:04.330154  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:04.495068  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:04.495096  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:04.495111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:04.576574  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:04.576612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.129008  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:07.149770  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:07.149835  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:07.200603  186170 cri.go:89] found id: ""
	I1028 12:17:07.200636  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.200648  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:07.200656  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:07.200733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:07.242681  186170 cri.go:89] found id: ""
	I1028 12:17:07.242709  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.242717  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:07.242723  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:07.242770  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:07.286826  186170 cri.go:89] found id: ""
	I1028 12:17:07.286860  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.286873  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:07.286881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:07.286943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:07.327730  186170 cri.go:89] found id: ""
	I1028 12:17:07.327765  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.327777  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:07.327787  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:07.327855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:07.369138  186170 cri.go:89] found id: ""
	I1028 12:17:07.369167  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.369178  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:07.369187  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:07.369257  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:07.411640  186170 cri.go:89] found id: ""
	I1028 12:17:07.411678  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.411690  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:07.411697  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:07.411758  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:07.454066  186170 cri.go:89] found id: ""
	I1028 12:17:07.454099  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.454109  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:07.454119  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:07.454180  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:07.489981  186170 cri.go:89] found id: ""
	I1028 12:17:07.490011  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.490020  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:07.490030  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:07.490044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:07.559890  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:07.559916  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:07.559927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:07.641601  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:07.641647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.687694  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:07.687732  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:07.739346  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:07.739389  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:06.558978  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:09.058557  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:08.507261  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:10.508790  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:11.007666  185546 node_ready.go:49] node "no-preload-871884" has status "Ready":"True"
	I1028 12:17:11.007698  185546 node_ready.go:38] duration metric: took 7.003728813s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:11.007710  185546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:11.014677  185546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020020  185546 pod_ready.go:93] pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:11.020042  185546 pod_ready.go:82] duration metric: took 5.339994ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020053  185546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:08.765053  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.766104  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.262069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:10.277467  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:10.277566  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:10.320331  186170 cri.go:89] found id: ""
	I1028 12:17:10.320366  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.320378  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:10.320387  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:10.320455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:10.357204  186170 cri.go:89] found id: ""
	I1028 12:17:10.357235  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.357252  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:10.357261  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:10.357324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:10.392480  186170 cri.go:89] found id: ""
	I1028 12:17:10.392510  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.392519  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:10.392526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:10.392574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:10.430084  186170 cri.go:89] found id: ""
	I1028 12:17:10.430120  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.430132  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:10.430140  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:10.430207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:10.479689  186170 cri.go:89] found id: ""
	I1028 12:17:10.479717  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.479724  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:10.479730  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:10.479786  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:10.520871  186170 cri.go:89] found id: ""
	I1028 12:17:10.520902  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.520912  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:10.520920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:10.520978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:10.559121  186170 cri.go:89] found id: ""
	I1028 12:17:10.559154  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.559167  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:10.559176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:10.559254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:10.596552  186170 cri.go:89] found id: ""
	I1028 12:17:10.596583  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.596594  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:10.596603  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:10.596615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:10.673014  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:10.673037  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:10.673055  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:10.762942  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:10.762982  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:10.805866  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:10.805901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:10.858861  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:10.858895  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:11.556955  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.560411  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.027402  185546 pod_ready.go:103] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:14.026501  185546 pod_ready.go:93] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.026537  185546 pod_ready.go:82] duration metric: took 3.006475793s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.026552  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036355  185546 pod_ready.go:93] pod "kube-apiserver-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.036379  185546 pod_ready.go:82] duration metric: took 9.819102ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036391  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042711  185546 pod_ready.go:93] pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.042734  185546 pod_ready.go:82] duration metric: took 6.336523ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042745  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047387  185546 pod_ready.go:93] pod "kube-proxy-6rc4l" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.047409  185546 pod_ready.go:82] duration metric: took 4.657388ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047422  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208217  185546 pod_ready.go:93] pod "kube-scheduler-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.208243  185546 pod_ready.go:82] duration metric: took 160.813834ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208254  185546 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:16.214834  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.268493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:15.271377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.373936  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:13.387904  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:13.387969  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:13.435502  186170 cri.go:89] found id: ""
	I1028 12:17:13.435528  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.435536  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:13.435547  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:13.435593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:13.475592  186170 cri.go:89] found id: ""
	I1028 12:17:13.475621  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.475631  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:13.475639  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:13.475703  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:13.524964  186170 cri.go:89] found id: ""
	I1028 12:17:13.524993  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.525002  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:13.525010  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:13.525071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:13.570408  186170 cri.go:89] found id: ""
	I1028 12:17:13.570437  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.570446  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:13.570455  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:13.570515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:13.620981  186170 cri.go:89] found id: ""
	I1028 12:17:13.621008  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.621016  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:13.621022  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:13.621071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:13.657345  186170 cri.go:89] found id: ""
	I1028 12:17:13.657375  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.657385  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:13.657393  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:13.657455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:13.695975  186170 cri.go:89] found id: ""
	I1028 12:17:13.695998  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.696005  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:13.696012  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:13.696059  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:13.744055  186170 cri.go:89] found id: ""
	I1028 12:17:13.744093  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.744112  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:13.744128  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:13.744143  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:13.798898  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:13.798936  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:13.813630  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:13.813676  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:13.886699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:13.886733  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:13.886750  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:13.972377  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:13.972419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.518525  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:16.532512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:16.532594  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:16.573345  186170 cri.go:89] found id: ""
	I1028 12:17:16.573370  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.573377  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:16.573384  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:16.573449  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:16.611130  186170 cri.go:89] found id: ""
	I1028 12:17:16.611159  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.611170  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:16.611179  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:16.611242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:16.646155  186170 cri.go:89] found id: ""
	I1028 12:17:16.646180  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.646187  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:16.646194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:16.646253  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:16.680731  186170 cri.go:89] found id: ""
	I1028 12:17:16.680761  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.680770  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:16.680776  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:16.680836  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:16.725323  186170 cri.go:89] found id: ""
	I1028 12:17:16.725351  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.725361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:16.725370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:16.725429  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:16.761810  186170 cri.go:89] found id: ""
	I1028 12:17:16.761839  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.761850  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:16.761859  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:16.761919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:16.797737  186170 cri.go:89] found id: ""
	I1028 12:17:16.797771  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.797783  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:16.797791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:16.797854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:16.834045  186170 cri.go:89] found id: ""
	I1028 12:17:16.834077  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.834087  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:16.834098  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:16.834111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:16.885174  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:16.885211  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:16.900281  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:16.900312  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:16.973761  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:16.973784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:16.973799  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:17.058711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:17.058747  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.056296  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.557898  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.215767  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:20.219613  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:17.764493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.766909  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:21.769560  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.605867  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:19.620832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:19.620896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:19.660722  186170 cri.go:89] found id: ""
	I1028 12:17:19.660747  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.660757  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:19.660765  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:19.660825  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:19.698537  186170 cri.go:89] found id: ""
	I1028 12:17:19.698571  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.698581  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:19.698590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:19.698639  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:19.736911  186170 cri.go:89] found id: ""
	I1028 12:17:19.736945  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.736956  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:19.736972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:19.737041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:19.779343  186170 cri.go:89] found id: ""
	I1028 12:17:19.779371  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.779379  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:19.779384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:19.779432  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:19.824749  186170 cri.go:89] found id: ""
	I1028 12:17:19.824778  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.824788  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:19.824796  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:19.824861  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:19.862810  186170 cri.go:89] found id: ""
	I1028 12:17:19.862850  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.862862  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:19.862871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:19.862935  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:19.910552  186170 cri.go:89] found id: ""
	I1028 12:17:19.910583  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.910592  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:19.910601  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:19.910663  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:19.956806  186170 cri.go:89] found id: ""
	I1028 12:17:19.956838  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.956850  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:19.956862  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:19.956879  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:20.018142  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:20.018187  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:20.035656  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:20.035696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:20.112484  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:20.112515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:20.112535  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:20.203034  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:20.203079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:22.749198  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:22.762993  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:22.763073  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:22.808879  186170 cri.go:89] found id: ""
	I1028 12:17:22.808923  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.808934  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:22.808943  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:22.809013  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:22.845367  186170 cri.go:89] found id: ""
	I1028 12:17:22.845393  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.845401  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:22.845407  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:22.845457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:22.884841  186170 cri.go:89] found id: ""
	I1028 12:17:22.884870  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.884877  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:22.884884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:22.884936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:22.921830  186170 cri.go:89] found id: ""
	I1028 12:17:22.921857  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.921865  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:22.921871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:22.921917  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:22.958981  186170 cri.go:89] found id: ""
	I1028 12:17:22.959016  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.959028  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:22.959038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:22.959138  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:22.993987  186170 cri.go:89] found id: ""
	I1028 12:17:22.994022  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.994033  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:22.994041  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:22.994112  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:23.036235  186170 cri.go:89] found id: ""
	I1028 12:17:23.036262  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.036270  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:23.036276  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:23.036326  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:23.084209  186170 cri.go:89] found id: ""
	I1028 12:17:23.084237  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.084248  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:23.084260  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:23.084274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:23.168684  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:23.168725  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:23.211205  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:23.211246  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:23.269140  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:23.269174  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:23.283588  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:23.283620  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:17:21.057114  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:23.058470  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:25.556210  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:22.714692  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.717301  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.269572  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:26.765467  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:17:23.363349  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:25.864503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:25.881420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:25.881505  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:25.920194  186170 cri.go:89] found id: ""
	I1028 12:17:25.920230  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.920242  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:25.920250  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:25.920319  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:25.982898  186170 cri.go:89] found id: ""
	I1028 12:17:25.982940  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.982952  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:25.982960  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:25.983026  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:26.042807  186170 cri.go:89] found id: ""
	I1028 12:17:26.042848  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.042856  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:26.042863  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:26.042914  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:26.081683  186170 cri.go:89] found id: ""
	I1028 12:17:26.081717  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.081729  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:26.081738  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:26.081811  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:26.118390  186170 cri.go:89] found id: ""
	I1028 12:17:26.118419  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.118426  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:26.118433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:26.118482  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:26.154065  186170 cri.go:89] found id: ""
	I1028 12:17:26.154100  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.154108  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:26.154114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:26.154168  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:26.195602  186170 cri.go:89] found id: ""
	I1028 12:17:26.195634  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.195645  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:26.195656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:26.195711  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:26.237315  186170 cri.go:89] found id: ""
	I1028 12:17:26.237350  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.237361  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:26.237371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:26.237383  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:26.319079  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:26.319121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:26.360967  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:26.360996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:26.414689  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:26.414728  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:26.429733  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:26.429763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:26.503297  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:28.056563  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:30.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:27.215356  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.216505  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.267239  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.765267  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.003479  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:29.017833  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:29.017908  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:29.067759  186170 cri.go:89] found id: ""
	I1028 12:17:29.067785  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.067793  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:29.067799  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:29.067856  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:29.114369  186170 cri.go:89] found id: ""
	I1028 12:17:29.114401  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.114411  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:29.114419  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:29.114511  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:29.154640  186170 cri.go:89] found id: ""
	I1028 12:17:29.154672  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.154683  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:29.154692  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:29.154749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:29.194296  186170 cri.go:89] found id: ""
	I1028 12:17:29.194331  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.194341  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:29.194349  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:29.194413  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:29.239107  186170 cri.go:89] found id: ""
	I1028 12:17:29.239133  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.239146  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:29.239152  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:29.239199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:29.274900  186170 cri.go:89] found id: ""
	I1028 12:17:29.274928  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.274937  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:29.274946  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:29.275010  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:29.310307  186170 cri.go:89] found id: ""
	I1028 12:17:29.310336  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.310346  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:29.310354  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:29.310421  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:29.345285  186170 cri.go:89] found id: ""
	I1028 12:17:29.345313  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.345351  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:29.345363  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:29.345379  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:29.402044  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:29.402094  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:29.417578  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:29.417615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:29.497733  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:29.497757  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:29.497773  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:29.587148  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:29.587202  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:32.132697  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:32.146675  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:32.146746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:32.188640  186170 cri.go:89] found id: ""
	I1028 12:17:32.188669  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.188681  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:32.188690  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:32.188749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:32.228690  186170 cri.go:89] found id: ""
	I1028 12:17:32.228726  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.228738  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:32.228745  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:32.228812  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:32.269133  186170 cri.go:89] found id: ""
	I1028 12:17:32.269180  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.269191  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:32.269200  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:32.269279  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:32.319757  186170 cri.go:89] found id: ""
	I1028 12:17:32.319796  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.319809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:32.319817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:32.319888  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:32.360072  186170 cri.go:89] found id: ""
	I1028 12:17:32.360104  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.360116  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:32.360125  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:32.360192  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:32.413256  186170 cri.go:89] found id: ""
	I1028 12:17:32.413286  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.413297  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:32.413319  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:32.413371  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:32.454505  186170 cri.go:89] found id: ""
	I1028 12:17:32.454536  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.454547  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:32.454555  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:32.454621  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:32.495091  186170 cri.go:89] found id: ""
	I1028 12:17:32.495129  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.495138  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:32.495148  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:32.495163  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:32.548669  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:32.548712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:32.566003  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:32.566044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:32.642079  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:32.642104  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:32.642117  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:32.727317  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:32.727361  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:33.055776  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.056525  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.714959  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:33.715292  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.715824  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:34.267155  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:36.765199  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.278752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:35.292256  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:35.292344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:35.328420  186170 cri.go:89] found id: ""
	I1028 12:17:35.328447  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.328457  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:35.328465  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:35.328528  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:35.365120  186170 cri.go:89] found id: ""
	I1028 12:17:35.365153  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.365162  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:35.365170  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:35.365236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:35.402057  186170 cri.go:89] found id: ""
	I1028 12:17:35.402093  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.402105  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:35.402114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:35.402179  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:35.436496  186170 cri.go:89] found id: ""
	I1028 12:17:35.436523  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.436531  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:35.436536  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:35.436593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:35.473369  186170 cri.go:89] found id: ""
	I1028 12:17:35.473399  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.473409  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:35.473416  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:35.473480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:35.511258  186170 cri.go:89] found id: ""
	I1028 12:17:35.511293  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.511305  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:35.511337  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:35.511403  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:35.548430  186170 cri.go:89] found id: ""
	I1028 12:17:35.548461  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.548472  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:35.548479  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:35.548526  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:35.584324  186170 cri.go:89] found id: ""
	I1028 12:17:35.584357  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.584369  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:35.584379  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:35.584394  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:35.598813  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:35.598855  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:35.676911  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:35.676935  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:35.676948  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:35.757166  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:35.757205  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:35.801381  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:35.801411  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:37.557428  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.057039  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:37.715996  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.213916  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.765841  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:41.267477  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.356346  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:38.370346  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:38.370436  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:38.413623  186170 cri.go:89] found id: ""
	I1028 12:17:38.413653  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.413664  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:38.413671  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:38.413741  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:38.450656  186170 cri.go:89] found id: ""
	I1028 12:17:38.450682  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.450691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:38.450697  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:38.450754  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:38.491050  186170 cri.go:89] found id: ""
	I1028 12:17:38.491083  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.491090  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:38.491096  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:38.491146  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:38.529708  186170 cri.go:89] found id: ""
	I1028 12:17:38.529735  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.529743  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:38.529749  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:38.529808  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:38.566632  186170 cri.go:89] found id: ""
	I1028 12:17:38.566659  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.566673  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:38.566681  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:38.566746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:38.602323  186170 cri.go:89] found id: ""
	I1028 12:17:38.602362  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.602374  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:38.602382  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:38.602444  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:38.646462  186170 cri.go:89] found id: ""
	I1028 12:17:38.646487  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.646494  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:38.646499  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:38.646560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:38.681803  186170 cri.go:89] found id: ""
	I1028 12:17:38.681830  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.681837  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:38.681847  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:38.681858  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:38.697360  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:38.697387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:38.769502  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:38.769549  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:38.769566  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:38.852029  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:38.852068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:38.895585  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:38.895621  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.450844  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:41.464665  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:41.464731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:41.507199  186170 cri.go:89] found id: ""
	I1028 12:17:41.507265  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.507274  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:41.507280  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:41.507351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:41.550126  186170 cri.go:89] found id: ""
	I1028 12:17:41.550158  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.550168  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:41.550176  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:41.550237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:41.588914  186170 cri.go:89] found id: ""
	I1028 12:17:41.588942  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.588953  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:41.588961  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:41.589027  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:41.625255  186170 cri.go:89] found id: ""
	I1028 12:17:41.625285  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.625297  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:41.625315  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:41.625386  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:41.663786  186170 cri.go:89] found id: ""
	I1028 12:17:41.663816  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.663833  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:41.663844  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:41.663911  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:41.698330  186170 cri.go:89] found id: ""
	I1028 12:17:41.698357  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.698364  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:41.698371  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:41.698424  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:41.734658  186170 cri.go:89] found id: ""
	I1028 12:17:41.734688  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.734699  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:41.734707  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:41.734776  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:41.773227  186170 cri.go:89] found id: ""
	I1028 12:17:41.773262  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.773273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:41.773286  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:41.773301  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:41.815830  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:41.815866  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.866789  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:41.866832  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:41.882088  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:41.882121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:41.953895  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:41.953917  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:41.953933  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:42.556504  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.557351  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:42.216159  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.216286  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:43.764776  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.265654  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.538655  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:44.551644  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:44.551724  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:44.589370  186170 cri.go:89] found id: ""
	I1028 12:17:44.589400  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.589407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:44.589413  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:44.589473  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:44.625143  186170 cri.go:89] found id: ""
	I1028 12:17:44.625175  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.625185  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:44.625198  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:44.625283  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:44.664579  186170 cri.go:89] found id: ""
	I1028 12:17:44.664609  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.664620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:44.664628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:44.664692  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:44.700009  186170 cri.go:89] found id: ""
	I1028 12:17:44.700038  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.700046  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:44.700053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:44.700119  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:44.736283  186170 cri.go:89] found id: ""
	I1028 12:17:44.736316  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.736323  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:44.736331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:44.736393  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:44.772214  186170 cri.go:89] found id: ""
	I1028 12:17:44.772249  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.772261  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:44.772270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:44.772324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:44.808152  186170 cri.go:89] found id: ""
	I1028 12:17:44.808187  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.808198  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:44.808206  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:44.808276  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:44.844208  186170 cri.go:89] found id: ""
	I1028 12:17:44.844238  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.844251  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:44.844264  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:44.844286  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:44.925988  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:44.926029  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:44.964936  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:44.964969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:45.015630  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:45.015675  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:45.030537  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:45.030571  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:45.103861  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:47.604548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:47.618858  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:47.618941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:47.663237  186170 cri.go:89] found id: ""
	I1028 12:17:47.663267  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.663278  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:47.663285  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:47.663350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:47.703207  186170 cri.go:89] found id: ""
	I1028 12:17:47.703236  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.703244  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:47.703250  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:47.703322  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:47.743050  186170 cri.go:89] found id: ""
	I1028 12:17:47.743081  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.743091  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:47.743099  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:47.743161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:47.789956  186170 cri.go:89] found id: ""
	I1028 12:17:47.789982  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.789989  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:47.789996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:47.790055  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:47.833134  186170 cri.go:89] found id: ""
	I1028 12:17:47.833165  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.833177  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:47.833184  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:47.833241  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:47.870881  186170 cri.go:89] found id: ""
	I1028 12:17:47.870905  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.870916  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:47.870925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:47.870992  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:47.908121  186170 cri.go:89] found id: ""
	I1028 12:17:47.908155  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.908165  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:47.908173  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:47.908236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:47.946835  186170 cri.go:89] found id: ""
	I1028 12:17:47.946871  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.946884  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:47.946896  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:47.946914  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:47.999276  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:47.999316  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:48.016268  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:48.016306  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:48.099928  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:48.099959  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:48.099976  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:48.180885  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:48.180937  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:46.565643  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.057078  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.716667  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.216308  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:48.267160  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.764737  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.727685  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:50.741737  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:50.741820  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:50.782030  186170 cri.go:89] found id: ""
	I1028 12:17:50.782060  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.782081  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:50.782090  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:50.782157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:50.817423  186170 cri.go:89] found id: ""
	I1028 12:17:50.817453  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.817464  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:50.817471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:50.817523  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:50.857203  186170 cri.go:89] found id: ""
	I1028 12:17:50.857232  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.857242  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:50.857249  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:50.857324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:50.894196  186170 cri.go:89] found id: ""
	I1028 12:17:50.894236  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.894248  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:50.894259  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:50.894325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:50.930014  186170 cri.go:89] found id: ""
	I1028 12:17:50.930046  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.930056  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:50.930064  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:50.930128  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:50.967742  186170 cri.go:89] found id: ""
	I1028 12:17:50.967774  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.967785  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:50.967799  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:50.967857  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:51.013232  186170 cri.go:89] found id: ""
	I1028 12:17:51.013258  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.013269  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:51.013281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:51.013341  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:51.052871  186170 cri.go:89] found id: ""
	I1028 12:17:51.052900  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.052912  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:51.052923  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:51.052943  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:51.106536  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:51.106579  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:51.121628  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:51.121670  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:51.200215  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:51.200249  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:51.200266  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:51.291948  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:51.291996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:51.058399  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.556450  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:55.557043  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:51.715736  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.215689  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:52.764839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.766020  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:57.269346  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.837066  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:53.851660  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:53.851747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:53.888799  186170 cri.go:89] found id: ""
	I1028 12:17:53.888835  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.888846  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:53.888855  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:53.888919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:53.923838  186170 cri.go:89] found id: ""
	I1028 12:17:53.923867  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.923875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:53.923880  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:53.923940  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:53.960264  186170 cri.go:89] found id: ""
	I1028 12:17:53.960293  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.960302  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:53.960307  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:53.960356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:53.995913  186170 cri.go:89] found id: ""
	I1028 12:17:53.995943  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.995952  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:53.995958  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:53.996009  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:54.032127  186170 cri.go:89] found id: ""
	I1028 12:17:54.032155  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.032163  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:54.032169  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:54.032219  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:54.070230  186170 cri.go:89] found id: ""
	I1028 12:17:54.070267  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.070279  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:54.070288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:54.070346  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:54.104992  186170 cri.go:89] found id: ""
	I1028 12:17:54.105024  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.105032  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:54.105038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:54.105099  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:54.140071  186170 cri.go:89] found id: ""
	I1028 12:17:54.140102  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.140113  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:54.140124  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:54.140137  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:54.195304  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:54.195353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:54.210315  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:54.210355  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:54.301247  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:54.301279  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:54.301300  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:54.382818  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:54.382876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:56.928740  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:56.942264  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:56.942334  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:56.979445  186170 cri.go:89] found id: ""
	I1028 12:17:56.979494  186170 logs.go:282] 0 containers: []
	W1028 12:17:56.979503  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:56.979510  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:56.979580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:57.017777  186170 cri.go:89] found id: ""
	I1028 12:17:57.017817  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.017831  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:57.017840  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:57.017954  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:57.058842  186170 cri.go:89] found id: ""
	I1028 12:17:57.058873  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.058881  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:57.058887  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:57.058941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:57.096365  186170 cri.go:89] found id: ""
	I1028 12:17:57.096393  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.096401  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:57.096408  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:57.096456  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:57.135395  186170 cri.go:89] found id: ""
	I1028 12:17:57.135425  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.135433  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:57.135440  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:57.135502  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:57.173426  186170 cri.go:89] found id: ""
	I1028 12:17:57.173455  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.173466  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:57.173473  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:57.173536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:57.209969  186170 cri.go:89] found id: ""
	I1028 12:17:57.210004  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.210015  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:57.210026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:57.210118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:57.252141  186170 cri.go:89] found id: ""
	I1028 12:17:57.252172  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.252182  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:57.252192  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:57.252206  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:57.304533  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:57.304576  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:57.319775  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:57.319807  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:57.385156  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:57.385186  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:57.385198  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:57.464777  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:57.464818  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:57.557519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.057963  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:56.715168  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:58.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.215445  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:59.271418  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.766158  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.005073  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:00.033478  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:00.033580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:00.071437  186170 cri.go:89] found id: ""
	I1028 12:18:00.071462  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.071470  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:00.071475  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:00.071524  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:00.108147  186170 cri.go:89] found id: ""
	I1028 12:18:00.108183  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.108195  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:00.108204  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:00.108262  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:00.146129  186170 cri.go:89] found id: ""
	I1028 12:18:00.146157  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.146168  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:00.146176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:00.146237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:00.184211  186170 cri.go:89] found id: ""
	I1028 12:18:00.184239  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.184254  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:00.184262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:00.184325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:00.221949  186170 cri.go:89] found id: ""
	I1028 12:18:00.221980  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.221988  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:00.221995  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:00.222049  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:00.264173  186170 cri.go:89] found id: ""
	I1028 12:18:00.264203  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.264213  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:00.264230  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:00.264287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:00.302024  186170 cri.go:89] found id: ""
	I1028 12:18:00.302048  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.302057  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:00.302065  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:00.302134  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:00.340500  186170 cri.go:89] found id: ""
	I1028 12:18:00.340529  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.340542  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:00.340553  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:00.340574  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:00.392375  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:00.392419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:00.409823  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:00.409854  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:00.489965  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:00.489988  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:00.490000  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:00.574510  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:00.574553  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.116821  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:03.131120  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:03.131188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:03.168283  186170 cri.go:89] found id: ""
	I1028 12:18:03.168320  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.168331  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:03.168340  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:03.168404  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:03.210877  186170 cri.go:89] found id: ""
	I1028 12:18:03.210902  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.210910  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:03.210922  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:03.210981  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:03.248316  186170 cri.go:89] found id: ""
	I1028 12:18:03.248351  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.248362  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:03.248370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:03.248437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:03.287624  186170 cri.go:89] found id: ""
	I1028 12:18:03.287653  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.287663  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:03.287674  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:03.287738  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:02.556743  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.055348  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.217504  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.715462  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.768899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:06.266111  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.323235  186170 cri.go:89] found id: ""
	I1028 12:18:03.323268  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.323281  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:03.323289  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:03.323350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:03.359449  186170 cri.go:89] found id: ""
	I1028 12:18:03.359481  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.359489  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:03.359496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:03.359544  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:03.397656  186170 cri.go:89] found id: ""
	I1028 12:18:03.397682  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.397690  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:03.397696  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:03.397756  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:03.436269  186170 cri.go:89] found id: ""
	I1028 12:18:03.436312  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.436325  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:03.436337  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:03.436353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.484677  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:03.484721  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:03.538826  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:03.538867  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:03.554032  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:03.554067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:03.630222  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:03.630256  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:03.630274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.208709  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:06.223650  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:06.223731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:06.264302  186170 cri.go:89] found id: ""
	I1028 12:18:06.264339  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.264348  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:06.264356  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:06.264415  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:06.306168  186170 cri.go:89] found id: ""
	I1028 12:18:06.306204  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.306212  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:06.306218  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:06.306306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:06.344883  186170 cri.go:89] found id: ""
	I1028 12:18:06.344909  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.344920  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:06.344927  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:06.344978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:06.382601  186170 cri.go:89] found id: ""
	I1028 12:18:06.382630  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.382640  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:06.382648  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:06.382720  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:06.428844  186170 cri.go:89] found id: ""
	I1028 12:18:06.428871  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.428878  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:06.428884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:06.428936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:06.480468  186170 cri.go:89] found id: ""
	I1028 12:18:06.480497  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.480508  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:06.480516  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:06.480581  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:06.525838  186170 cri.go:89] found id: ""
	I1028 12:18:06.525869  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.525882  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:06.525890  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:06.525950  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:06.572122  186170 cri.go:89] found id: ""
	I1028 12:18:06.572147  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.572154  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:06.572164  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:06.572176  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:06.642898  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:06.642925  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:06.642941  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.727353  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:06.727399  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:06.770170  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:06.770208  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:06.825593  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:06.825635  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:07.055842  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.057870  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:07.716593  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.215089  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:08.266990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.765441  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.340955  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:09.355706  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:09.355783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:09.390008  186170 cri.go:89] found id: ""
	I1028 12:18:09.390039  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.390050  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:09.390057  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:09.390123  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:09.428209  186170 cri.go:89] found id: ""
	I1028 12:18:09.428247  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.428259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:09.428267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:09.428327  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:09.466499  186170 cri.go:89] found id: ""
	I1028 12:18:09.466524  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.466531  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:09.466538  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:09.466596  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:09.505384  186170 cri.go:89] found id: ""
	I1028 12:18:09.505418  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.505426  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:09.505433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:09.505492  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:09.543113  186170 cri.go:89] found id: ""
	I1028 12:18:09.543145  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.543154  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:09.543160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:09.543225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:09.581402  186170 cri.go:89] found id: ""
	I1028 12:18:09.581436  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.581446  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:09.581459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:09.581542  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:09.620586  186170 cri.go:89] found id: ""
	I1028 12:18:09.620616  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.620623  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:09.620629  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:09.620682  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:09.657220  186170 cri.go:89] found id: ""
	I1028 12:18:09.657246  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.657253  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:09.657261  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:09.657272  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:09.709636  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:09.709671  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:09.724476  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:09.724510  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:09.800194  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:09.800226  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:09.800242  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:09.882217  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:09.882254  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:12.425609  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:12.443417  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:12.443480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:12.509173  186170 cri.go:89] found id: ""
	I1028 12:18:12.509202  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.509211  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:12.509217  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:12.509287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:12.546564  186170 cri.go:89] found id: ""
	I1028 12:18:12.546595  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.546605  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:12.546612  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:12.546676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:12.584949  186170 cri.go:89] found id: ""
	I1028 12:18:12.584982  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.584990  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:12.584996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:12.585045  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:12.624513  186170 cri.go:89] found id: ""
	I1028 12:18:12.624543  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.624554  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:12.624562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:12.624624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:12.661811  186170 cri.go:89] found id: ""
	I1028 12:18:12.661854  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.661867  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:12.661876  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:12.661936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:12.700037  186170 cri.go:89] found id: ""
	I1028 12:18:12.700072  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.700080  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:12.700086  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:12.700149  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:12.740604  186170 cri.go:89] found id: ""
	I1028 12:18:12.740629  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.740637  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:12.740643  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:12.740696  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:12.779296  186170 cri.go:89] found id: ""
	I1028 12:18:12.779323  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.779333  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:12.779344  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:12.779358  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:12.830286  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:12.830330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:12.845423  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:12.845449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:12.923961  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:12.924003  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:12.924018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:13.003949  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:13.003990  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:11.556422  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.056678  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.216340  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.715086  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.766493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.766870  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.264729  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:15.552001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:15.565834  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:15.565899  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:15.598794  186170 cri.go:89] found id: ""
	I1028 12:18:15.598819  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.598828  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:15.598836  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:15.598904  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:15.637029  186170 cri.go:89] found id: ""
	I1028 12:18:15.637062  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.637073  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:15.637082  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:15.637148  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:15.675461  186170 cri.go:89] found id: ""
	I1028 12:18:15.675495  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.675503  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:15.675510  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:15.675577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:15.709169  186170 cri.go:89] found id: ""
	I1028 12:18:15.709198  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.709210  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:15.709217  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:15.709288  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:15.747687  186170 cri.go:89] found id: ""
	I1028 12:18:15.747715  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.747725  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:15.747740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:15.747802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:15.785554  186170 cri.go:89] found id: ""
	I1028 12:18:15.785587  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.785598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:15.785607  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:15.785674  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:15.828713  186170 cri.go:89] found id: ""
	I1028 12:18:15.828749  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.828762  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:15.828771  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:15.828834  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:15.864708  186170 cri.go:89] found id: ""
	I1028 12:18:15.864745  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.864757  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:15.864767  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:15.864788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:15.941064  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:15.941090  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:15.941102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:16.031546  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:16.031586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:16.074297  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:16.074343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:16.132758  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:16.132803  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:16.057216  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.555816  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:20.556292  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.215803  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.215927  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.265178  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.268144  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.649877  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:18.663420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:18.663480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:18.698967  186170 cri.go:89] found id: ""
	I1028 12:18:18.698999  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.699011  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:18.699020  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:18.699088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:18.738095  186170 cri.go:89] found id: ""
	I1028 12:18:18.738128  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.738140  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:18.738149  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:18.738231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:18.780039  186170 cri.go:89] found id: ""
	I1028 12:18:18.780066  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.780074  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:18.780080  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:18.780131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:18.820458  186170 cri.go:89] found id: ""
	I1028 12:18:18.820492  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.820501  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:18.820512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:18.820569  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:18.860856  186170 cri.go:89] found id: ""
	I1028 12:18:18.860887  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.860896  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:18.860903  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:18.860965  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:18.900435  186170 cri.go:89] found id: ""
	I1028 12:18:18.900467  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.900478  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:18.900486  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:18.900547  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:18.938468  186170 cri.go:89] found id: ""
	I1028 12:18:18.938499  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.938508  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:18.938515  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:18.938570  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:18.975389  186170 cri.go:89] found id: ""
	I1028 12:18:18.975429  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.975440  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:18.975451  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:18.975466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:19.028306  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:19.028354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:19.043348  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:19.043382  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:19.117653  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:19.117721  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:19.117737  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:19.204218  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:19.204256  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:21.749564  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:21.768060  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:21.768131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:21.805414  186170 cri.go:89] found id: ""
	I1028 12:18:21.805443  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.805454  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:21.805462  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:21.805541  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:21.842649  186170 cri.go:89] found id: ""
	I1028 12:18:21.842681  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.842691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:21.842699  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:21.842767  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:21.883241  186170 cri.go:89] found id: ""
	I1028 12:18:21.883269  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.883279  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:21.883288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:21.883351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:21.926358  186170 cri.go:89] found id: ""
	I1028 12:18:21.926386  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.926394  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:21.926401  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:21.926453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:21.964671  186170 cri.go:89] found id: ""
	I1028 12:18:21.964705  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.964717  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:21.964726  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:21.964794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:22.019111  186170 cri.go:89] found id: ""
	I1028 12:18:22.019144  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.019154  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:22.019163  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:22.019223  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:22.057484  186170 cri.go:89] found id: ""
	I1028 12:18:22.057511  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.057518  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:22.057547  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:22.057606  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:22.096908  186170 cri.go:89] found id: ""
	I1028 12:18:22.096931  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.096938  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:22.096947  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:22.096962  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:22.180348  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:22.180386  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:22.224772  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:22.224808  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:22.277686  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:22.277726  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:22.293300  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:22.293330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:22.369990  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:22.556987  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.057115  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.715576  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.715814  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.716043  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.767435  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:26.269805  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:24.870290  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:24.887030  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:24.887090  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:24.927592  186170 cri.go:89] found id: ""
	I1028 12:18:24.927620  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.927628  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:24.927635  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:24.927700  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:24.969025  186170 cri.go:89] found id: ""
	I1028 12:18:24.969059  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.969070  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:24.969077  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:24.969142  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:25.005439  186170 cri.go:89] found id: ""
	I1028 12:18:25.005476  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.005488  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:25.005496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:25.005573  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:25.046612  186170 cri.go:89] found id: ""
	I1028 12:18:25.046650  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.046659  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:25.046669  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:25.046733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:25.083162  186170 cri.go:89] found id: ""
	I1028 12:18:25.083186  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.083200  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:25.083209  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:25.083270  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:25.119277  186170 cri.go:89] found id: ""
	I1028 12:18:25.119322  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.119333  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:25.119341  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:25.119409  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:25.160875  186170 cri.go:89] found id: ""
	I1028 12:18:25.160906  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.160917  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:25.160925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:25.160987  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:25.194958  186170 cri.go:89] found id: ""
	I1028 12:18:25.194993  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.195003  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:25.195016  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:25.195032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:25.248571  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:25.248612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:25.264844  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:25.264876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:25.341487  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:25.341517  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:25.341552  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:25.419543  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:25.419586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:27.963358  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:27.977449  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:27.977509  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:28.013922  186170 cri.go:89] found id: ""
	I1028 12:18:28.013955  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.013963  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:28.013969  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:28.014050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:28.054628  186170 cri.go:89] found id: ""
	I1028 12:18:28.054658  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.054666  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:28.054671  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:28.054719  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:28.094289  186170 cri.go:89] found id: ""
	I1028 12:18:28.094315  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.094323  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:28.094330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:28.094390  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:28.131949  186170 cri.go:89] found id: ""
	I1028 12:18:28.131998  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.132011  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:28.132019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:28.132082  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:28.170428  186170 cri.go:89] found id: ""
	I1028 12:18:28.170461  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.170474  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:28.170483  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:28.170550  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:28.204953  186170 cri.go:89] found id: ""
	I1028 12:18:28.204980  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.204987  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:28.204994  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:28.205041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:28.247002  186170 cri.go:89] found id: ""
	I1028 12:18:28.247035  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.247044  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:28.247052  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:28.247122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:28.286700  186170 cri.go:89] found id: ""
	I1028 12:18:28.286730  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.286739  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:28.286747  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:28.286762  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:27.556197  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.057036  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.216535  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.715902  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.765730  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:31.267947  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.339162  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:28.339201  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:28.353667  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:28.353696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:28.426762  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:28.426784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:28.426800  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:28.511192  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:28.511232  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:31.054503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:31.069105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:31.069195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:31.112198  186170 cri.go:89] found id: ""
	I1028 12:18:31.112228  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.112237  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:31.112243  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:31.112306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:31.151487  186170 cri.go:89] found id: ""
	I1028 12:18:31.151522  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.151535  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:31.151544  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:31.151605  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:31.189604  186170 cri.go:89] found id: ""
	I1028 12:18:31.189636  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.189645  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:31.189651  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:31.189712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:31.231683  186170 cri.go:89] found id: ""
	I1028 12:18:31.231716  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.231726  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:31.231735  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:31.231793  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:31.268785  186170 cri.go:89] found id: ""
	I1028 12:18:31.268813  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.268824  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:31.268832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:31.268901  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:31.307450  186170 cri.go:89] found id: ""
	I1028 12:18:31.307475  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.307483  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:31.307489  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:31.307539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:31.342965  186170 cri.go:89] found id: ""
	I1028 12:18:31.342999  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.343011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:31.343019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:31.343084  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:31.380275  186170 cri.go:89] found id: ""
	I1028 12:18:31.380307  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.380317  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:31.380329  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:31.380343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:31.430198  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:31.430249  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:31.446355  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:31.446387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:31.530708  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:31.530738  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:31.530754  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:31.614033  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:31.614079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:32.556500  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.557446  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.214627  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:35.214782  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.772856  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:36.265722  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.156345  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:34.169766  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:34.169829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:34.208855  186170 cri.go:89] found id: ""
	I1028 12:18:34.208888  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.208903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:34.208910  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:34.208967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:34.258485  186170 cri.go:89] found id: ""
	I1028 12:18:34.258515  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.258524  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:34.258531  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:34.258593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:34.294139  186170 cri.go:89] found id: ""
	I1028 12:18:34.294168  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.294176  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:34.294182  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:34.294242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:34.329848  186170 cri.go:89] found id: ""
	I1028 12:18:34.329881  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.329892  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:34.329900  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:34.329967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:34.368223  186170 cri.go:89] found id: ""
	I1028 12:18:34.368249  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.368256  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:34.368262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:34.368310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:34.405101  186170 cri.go:89] found id: ""
	I1028 12:18:34.405133  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.405142  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:34.405149  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:34.405207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:34.441998  186170 cri.go:89] found id: ""
	I1028 12:18:34.442034  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.442045  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:34.442053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:34.442118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:34.478842  186170 cri.go:89] found id: ""
	I1028 12:18:34.478877  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.478888  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:34.478901  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:34.478917  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:34.532950  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:34.532991  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:34.548614  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:34.548643  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:34.623699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:34.623726  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:34.623743  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:34.702104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:34.702142  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.259720  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:37.276526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:37.276592  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:37.325783  186170 cri.go:89] found id: ""
	I1028 12:18:37.325823  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.325838  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:37.325847  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:37.325916  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:37.362754  186170 cri.go:89] found id: ""
	I1028 12:18:37.362784  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.362805  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:37.362813  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:37.362891  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:37.400428  186170 cri.go:89] found id: ""
	I1028 12:18:37.400465  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.400477  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:37.400485  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:37.400548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:37.438792  186170 cri.go:89] found id: ""
	I1028 12:18:37.438834  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.438846  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:37.438855  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:37.438918  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:37.477032  186170 cri.go:89] found id: ""
	I1028 12:18:37.477115  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.477126  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:37.477132  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:37.477199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:37.514834  186170 cri.go:89] found id: ""
	I1028 12:18:37.514866  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.514878  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:37.514888  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:37.514975  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:37.560797  186170 cri.go:89] found id: ""
	I1028 12:18:37.560821  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.560828  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:37.560835  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:37.560889  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:37.611126  186170 cri.go:89] found id: ""
	I1028 12:18:37.611156  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.611165  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:37.611177  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:37.611200  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.654809  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:37.654849  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:37.713519  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:37.713572  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:37.728043  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:37.728081  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:37.806662  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:37.806684  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:37.806702  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:36.559507  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.056993  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:37.215498  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.715541  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:38.266461  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.266611  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:42.268638  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.388380  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:40.402330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:40.402405  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:40.444948  186170 cri.go:89] found id: ""
	I1028 12:18:40.444978  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.444990  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:40.445002  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:40.445062  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:40.482342  186170 cri.go:89] found id: ""
	I1028 12:18:40.482378  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.482387  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:40.482393  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:40.482457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:40.532277  186170 cri.go:89] found id: ""
	I1028 12:18:40.532307  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.532318  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:40.532326  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:40.532388  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:40.579092  186170 cri.go:89] found id: ""
	I1028 12:18:40.579122  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.579130  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:40.579136  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:40.579204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:40.617091  186170 cri.go:89] found id: ""
	I1028 12:18:40.617116  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.617124  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:40.617130  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:40.617188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:40.655830  186170 cri.go:89] found id: ""
	I1028 12:18:40.655861  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.655871  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:40.655879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:40.655949  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:40.693436  186170 cri.go:89] found id: ""
	I1028 12:18:40.693472  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.693480  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:40.693490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:40.693572  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:40.731576  186170 cri.go:89] found id: ""
	I1028 12:18:40.731604  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.731615  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:40.731626  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:40.731642  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:40.782395  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:40.782441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:40.797572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:40.797607  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:40.873037  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:40.873078  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:40.873095  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:40.950913  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:40.950954  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:41.555847  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.558407  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:41.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.716370  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:46.214690  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:44.765752  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:47.266258  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.493377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:43.508379  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:43.508453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:43.546621  186170 cri.go:89] found id: ""
	I1028 12:18:43.546652  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.546660  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:43.546667  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:43.546714  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:43.587430  186170 cri.go:89] found id: ""
	I1028 12:18:43.587455  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.587462  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:43.587468  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:43.587520  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:43.623597  186170 cri.go:89] found id: ""
	I1028 12:18:43.623625  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.623633  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:43.623640  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:43.623702  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:43.661235  186170 cri.go:89] found id: ""
	I1028 12:18:43.661266  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.661274  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:43.661281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:43.661344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:43.697400  186170 cri.go:89] found id: ""
	I1028 12:18:43.697437  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.697448  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:43.697457  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:43.697521  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:43.732995  186170 cri.go:89] found id: ""
	I1028 12:18:43.733028  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.733038  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:43.733047  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:43.733115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:43.772570  186170 cri.go:89] found id: ""
	I1028 12:18:43.772595  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.772602  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:43.772608  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:43.772669  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:43.814234  186170 cri.go:89] found id: ""
	I1028 12:18:43.814265  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.814273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:43.814283  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:43.814295  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:43.868582  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:43.868630  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:43.885098  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:43.885136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:43.967902  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:43.967937  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:43.967955  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:44.048973  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:44.049021  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.592668  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:46.608596  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:46.608664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:46.652750  186170 cri.go:89] found id: ""
	I1028 12:18:46.652777  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.652785  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:46.652790  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:46.652848  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:46.696309  186170 cri.go:89] found id: ""
	I1028 12:18:46.696333  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.696340  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:46.696346  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:46.696396  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:46.741580  186170 cri.go:89] found id: ""
	I1028 12:18:46.741609  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.741620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:46.741628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:46.741693  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:46.782589  186170 cri.go:89] found id: ""
	I1028 12:18:46.782620  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.782628  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:46.782635  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:46.782695  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:46.821602  186170 cri.go:89] found id: ""
	I1028 12:18:46.821632  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.821644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:46.821653  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:46.821713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:46.857025  186170 cri.go:89] found id: ""
	I1028 12:18:46.857050  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.857060  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:46.857067  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:46.857115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:46.893687  186170 cri.go:89] found id: ""
	I1028 12:18:46.893725  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.893737  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:46.893746  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:46.893818  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:46.930334  186170 cri.go:89] found id: ""
	I1028 12:18:46.930367  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.930377  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:46.930385  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:46.930398  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:46.980610  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:46.980650  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:46.995861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:46.995901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:47.069355  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:47.069383  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:47.069396  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:47.157228  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:47.157284  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.056747  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.058377  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.557006  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.715456  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.716120  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.267222  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:51.765814  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.722229  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:49.735404  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:49.735507  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:49.776722  186170 cri.go:89] found id: ""
	I1028 12:18:49.776757  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.776768  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:49.776776  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:49.776844  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:49.812856  186170 cri.go:89] found id: ""
	I1028 12:18:49.812888  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.812898  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:49.812905  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:49.812989  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:49.849483  186170 cri.go:89] found id: ""
	I1028 12:18:49.849516  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.849544  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:49.849603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:49.849672  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:49.886525  186170 cri.go:89] found id: ""
	I1028 12:18:49.886555  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.886566  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:49.886574  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:49.886637  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:49.928249  186170 cri.go:89] found id: ""
	I1028 12:18:49.928281  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.928292  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:49.928299  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:49.928354  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:49.964587  186170 cri.go:89] found id: ""
	I1028 12:18:49.964619  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.964630  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:49.964641  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:49.964704  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:50.002275  186170 cri.go:89] found id: ""
	I1028 12:18:50.002305  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.002314  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:50.002321  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:50.002376  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:50.040949  186170 cri.go:89] found id: ""
	I1028 12:18:50.040979  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.040990  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:50.041003  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:50.041018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:50.086062  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:50.086098  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:50.138786  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:50.138837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:50.152992  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:50.153023  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:50.230432  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:50.230465  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:50.230481  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:52.813001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:52.825800  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:52.825879  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:52.863852  186170 cri.go:89] found id: ""
	I1028 12:18:52.863882  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.863893  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:52.863901  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:52.863967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:52.902963  186170 cri.go:89] found id: ""
	I1028 12:18:52.903003  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.903016  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:52.903024  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:52.903098  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:52.950862  186170 cri.go:89] found id: ""
	I1028 12:18:52.950893  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.950903  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:52.950912  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:52.950980  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:52.995840  186170 cri.go:89] found id: ""
	I1028 12:18:52.995872  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.995883  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:52.995891  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:52.995960  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:53.040153  186170 cri.go:89] found id: ""
	I1028 12:18:53.040179  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.040187  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:53.040194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:53.040256  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:53.077492  186170 cri.go:89] found id: ""
	I1028 12:18:53.077548  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.077561  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:53.077568  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:53.077618  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:53.114930  186170 cri.go:89] found id: ""
	I1028 12:18:53.114962  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.114973  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:53.114981  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:53.115064  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:53.152707  186170 cri.go:89] found id: ""
	I1028 12:18:53.152737  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.152747  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:53.152760  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:53.152777  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:53.195033  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:53.195068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:53.246464  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:53.246500  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:53.261430  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:53.261456  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:18:52.557045  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.057031  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:53.215817  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.714784  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:54.268377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:56.764471  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:18:53.343518  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:53.343541  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:53.343556  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:55.924584  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:55.938627  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:55.938712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:55.976319  186170 cri.go:89] found id: ""
	I1028 12:18:55.976354  186170 logs.go:282] 0 containers: []
	W1028 12:18:55.976364  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:55.976372  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:55.976440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:56.013947  186170 cri.go:89] found id: ""
	I1028 12:18:56.013979  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.014002  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:56.014010  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:56.014065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:56.055934  186170 cri.go:89] found id: ""
	I1028 12:18:56.055963  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.055970  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:56.055976  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:56.056030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:56.092766  186170 cri.go:89] found id: ""
	I1028 12:18:56.092798  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.092809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:56.092817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:56.092883  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:56.129708  186170 cri.go:89] found id: ""
	I1028 12:18:56.129741  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.129748  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:56.129755  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:56.129817  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:56.169640  186170 cri.go:89] found id: ""
	I1028 12:18:56.169684  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.169693  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:56.169700  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:56.169761  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:56.210585  186170 cri.go:89] found id: ""
	I1028 12:18:56.210617  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.210626  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:56.210633  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:56.210683  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:56.248144  186170 cri.go:89] found id: ""
	I1028 12:18:56.248177  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.248189  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:56.248201  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:56.248216  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:56.298962  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:56.299004  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:56.313314  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:56.313351  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:56.389450  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:56.389473  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:56.389508  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:56.470888  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:56.470927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:57.556098  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.057165  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:57.716269  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.214149  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:58.765585  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:01.265119  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:59.012377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:59.025740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:59.025853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:59.063706  186170 cri.go:89] found id: ""
	I1028 12:18:59.063770  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.063782  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:59.063794  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:59.063855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:59.100543  186170 cri.go:89] found id: ""
	I1028 12:18:59.100573  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.100582  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:59.100590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:59.100651  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:59.140044  186170 cri.go:89] found id: ""
	I1028 12:18:59.140073  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.140080  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:59.140087  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:59.140133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:59.174872  186170 cri.go:89] found id: ""
	I1028 12:18:59.174905  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.174914  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:59.174920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:59.174971  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:59.210456  186170 cri.go:89] found id: ""
	I1028 12:18:59.210484  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.210492  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:59.210498  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:59.210560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:59.248441  186170 cri.go:89] found id: ""
	I1028 12:18:59.248474  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.248485  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:59.248494  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:59.248558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:59.286897  186170 cri.go:89] found id: ""
	I1028 12:18:59.286928  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.286937  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:59.286944  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:59.286996  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:59.323187  186170 cri.go:89] found id: ""
	I1028 12:18:59.323221  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.323232  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:59.323244  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:59.323260  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:59.401126  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:59.401156  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:59.401171  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:59.486673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:59.486712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:59.532117  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:59.532153  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:59.588697  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:59.588738  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.104377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:02.118007  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:02.118092  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:02.157674  186170 cri.go:89] found id: ""
	I1028 12:19:02.157705  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.157715  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:02.157724  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:02.157783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:02.194407  186170 cri.go:89] found id: ""
	I1028 12:19:02.194437  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.194448  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:02.194456  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:02.194546  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:02.232940  186170 cri.go:89] found id: ""
	I1028 12:19:02.232975  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.232988  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:02.232996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:02.233070  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:02.271554  186170 cri.go:89] found id: ""
	I1028 12:19:02.271595  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.271606  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:02.271613  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:02.271681  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:02.309932  186170 cri.go:89] found id: ""
	I1028 12:19:02.309965  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.309975  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:02.309984  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:02.310044  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:02.345704  186170 cri.go:89] found id: ""
	I1028 12:19:02.345732  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.345740  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:02.345747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:02.345794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:02.381727  186170 cri.go:89] found id: ""
	I1028 12:19:02.381760  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.381770  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:02.381778  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:02.381841  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:02.417888  186170 cri.go:89] found id: ""
	I1028 12:19:02.417922  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.417933  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:02.417943  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:02.417961  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:02.497427  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:02.497458  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:02.497471  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:02.580562  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:02.580600  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:02.619048  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:02.619087  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:02.677089  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:02.677136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.556763  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.557107  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:02.216779  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.714940  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:03.267189  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.268332  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.192892  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:05.207240  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:05.207325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:05.244005  186170 cri.go:89] found id: ""
	I1028 12:19:05.244041  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.244070  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:05.244078  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:05.244130  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:05.285828  186170 cri.go:89] found id: ""
	I1028 12:19:05.285859  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.285869  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:05.285877  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:05.285936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:05.324666  186170 cri.go:89] found id: ""
	I1028 12:19:05.324694  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.324706  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:05.324713  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:05.324782  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:05.361365  186170 cri.go:89] found id: ""
	I1028 12:19:05.361401  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.361414  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:05.361423  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:05.361485  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:05.399962  186170 cri.go:89] found id: ""
	I1028 12:19:05.399996  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.400007  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:05.400017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:05.400116  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:05.438510  186170 cri.go:89] found id: ""
	I1028 12:19:05.438541  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.438553  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:05.438562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:05.438624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:05.477168  186170 cri.go:89] found id: ""
	I1028 12:19:05.477204  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.477214  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:05.477222  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:05.477286  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:05.513314  186170 cri.go:89] found id: ""
	I1028 12:19:05.513350  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.513362  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:05.513374  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:05.513388  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:05.568453  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:05.568490  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:05.583833  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:05.583870  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:05.659413  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:05.659438  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:05.659457  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:05.744673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:05.744714  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.291543  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:08.305747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:08.305829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:07.056718  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:09.056994  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:06.715788  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.716850  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:11.215701  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:07.765389  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:10.268458  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.350508  186170 cri.go:89] found id: ""
	I1028 12:19:08.350536  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.350544  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:08.350550  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:08.350602  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:08.387432  186170 cri.go:89] found id: ""
	I1028 12:19:08.387463  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.387470  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:08.387476  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:08.387527  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:08.426351  186170 cri.go:89] found id: ""
	I1028 12:19:08.426392  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.426404  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:08.426412  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:08.426478  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:08.467546  186170 cri.go:89] found id: ""
	I1028 12:19:08.467577  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.467586  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:08.467592  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:08.467642  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:08.504317  186170 cri.go:89] found id: ""
	I1028 12:19:08.504347  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.504356  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:08.504363  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:08.504418  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:08.539598  186170 cri.go:89] found id: ""
	I1028 12:19:08.539630  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.539642  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:08.539655  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:08.539713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:08.578128  186170 cri.go:89] found id: ""
	I1028 12:19:08.578162  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.578173  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:08.578181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:08.578247  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:08.614276  186170 cri.go:89] found id: ""
	I1028 12:19:08.614309  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.614326  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:08.614338  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:08.614354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:08.691937  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:08.691961  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:08.691977  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:08.773046  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:08.773092  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.816419  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:08.816449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:08.868763  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:08.868811  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.384115  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:11.398325  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:11.398416  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:11.433049  186170 cri.go:89] found id: ""
	I1028 12:19:11.433081  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.433089  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:11.433097  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:11.433151  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:11.469221  186170 cri.go:89] found id: ""
	I1028 12:19:11.469249  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.469259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:11.469267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:11.469332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:11.506673  186170 cri.go:89] found id: ""
	I1028 12:19:11.506703  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.506714  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:11.506722  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:11.506802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:11.542657  186170 cri.go:89] found id: ""
	I1028 12:19:11.542684  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.542694  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:11.542702  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:11.542760  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:11.582873  186170 cri.go:89] found id: ""
	I1028 12:19:11.582903  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.582913  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:11.582921  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:11.582990  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:11.619742  186170 cri.go:89] found id: ""
	I1028 12:19:11.619770  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.619784  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:11.619791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:11.619854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:11.654169  186170 cri.go:89] found id: ""
	I1028 12:19:11.654200  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.654211  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:11.654220  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:11.654280  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:11.690586  186170 cri.go:89] found id: ""
	I1028 12:19:11.690614  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.690624  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:11.690637  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:11.690656  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:11.744337  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:11.744378  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.758405  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:11.758446  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:11.843252  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:11.843278  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:11.843289  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:11.924104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:11.924140  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:11.559182  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.057546  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:13.216963  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:15.715550  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:12.764850  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.766597  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.265687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.464177  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:14.478351  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:14.478423  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:14.518159  186170 cri.go:89] found id: ""
	I1028 12:19:14.518189  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.518200  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:14.518209  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:14.518260  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:14.565688  186170 cri.go:89] found id: ""
	I1028 12:19:14.565722  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.565734  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:14.565742  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:14.565802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:14.601994  186170 cri.go:89] found id: ""
	I1028 12:19:14.602021  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.602029  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:14.602054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:14.602122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:14.640100  186170 cri.go:89] found id: ""
	I1028 12:19:14.640142  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.640156  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:14.640166  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:14.640237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:14.675395  186170 cri.go:89] found id: ""
	I1028 12:19:14.675422  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.675430  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:14.675436  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:14.675494  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:14.715365  186170 cri.go:89] found id: ""
	I1028 12:19:14.715393  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.715404  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:14.715413  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:14.715466  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:14.761335  186170 cri.go:89] found id: ""
	I1028 12:19:14.761363  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.761373  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:14.761381  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:14.761446  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:14.800412  186170 cri.go:89] found id: ""
	I1028 12:19:14.800449  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.800461  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:14.800472  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:14.800486  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:14.882189  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:14.882227  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:14.926725  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:14.926752  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:14.979280  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:14.979329  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:14.993985  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:14.994019  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:15.063407  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.564258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:17.578611  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:17.578679  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:17.615753  186170 cri.go:89] found id: ""
	I1028 12:19:17.615784  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.615797  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:17.615805  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:17.615864  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:17.650812  186170 cri.go:89] found id: ""
	I1028 12:19:17.650851  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.650862  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:17.650870  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:17.651014  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:17.693006  186170 cri.go:89] found id: ""
	I1028 12:19:17.693039  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.693048  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:17.693054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:17.693104  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:17.733120  186170 cri.go:89] found id: ""
	I1028 12:19:17.733146  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.733153  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:17.733160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:17.733212  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:17.773002  186170 cri.go:89] found id: ""
	I1028 12:19:17.773029  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.773036  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:17.773042  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:17.773097  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:17.812560  186170 cri.go:89] found id: ""
	I1028 12:19:17.812590  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.812597  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:17.812603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:17.812653  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:17.848307  186170 cri.go:89] found id: ""
	I1028 12:19:17.848341  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.848349  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:17.848355  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:17.848402  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:17.888184  186170 cri.go:89] found id: ""
	I1028 12:19:17.888210  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.888217  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:17.888226  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:17.888238  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:17.901662  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:17.901692  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:17.975611  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.975634  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:17.975647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:18.054762  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:18.054801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:18.101269  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:18.101302  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:16.057835  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:18.556414  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.716374  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.216629  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:19.266849  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:21.267040  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.655292  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:20.671085  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:20.671161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:20.715368  186170 cri.go:89] found id: ""
	I1028 12:19:20.715397  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.715407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:20.715415  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:20.715476  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:20.762337  186170 cri.go:89] found id: ""
	I1028 12:19:20.762366  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.762374  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:20.762379  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:20.762437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:20.804710  186170 cri.go:89] found id: ""
	I1028 12:19:20.804740  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.804747  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:20.804759  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:20.804813  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:20.841158  186170 cri.go:89] found id: ""
	I1028 12:19:20.841189  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.841199  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:20.841208  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:20.841277  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:20.883976  186170 cri.go:89] found id: ""
	I1028 12:19:20.884016  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.884027  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:20.884035  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:20.884105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:20.930155  186170 cri.go:89] found id: ""
	I1028 12:19:20.930186  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.930194  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:20.930201  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:20.930265  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:20.967805  186170 cri.go:89] found id: ""
	I1028 12:19:20.967832  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.967840  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:20.967847  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:20.967896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:21.020010  186170 cri.go:89] found id: ""
	I1028 12:19:21.020038  186170 logs.go:282] 0 containers: []
	W1028 12:19:21.020046  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:21.020055  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:21.020079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:21.081013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:21.081054  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:21.096709  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:21.096741  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:21.172935  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:21.172957  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:21.172970  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:21.248909  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:21.248949  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:21.056990  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.057233  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:25.555717  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:22.715323  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:24.715818  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.765935  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:26.264839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.793748  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:23.809036  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:23.809107  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:23.848021  186170 cri.go:89] found id: ""
	I1028 12:19:23.848051  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.848064  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:23.848070  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:23.848122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:23.885253  186170 cri.go:89] found id: ""
	I1028 12:19:23.885278  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.885294  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:23.885302  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:23.885360  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:23.923423  186170 cri.go:89] found id: ""
	I1028 12:19:23.923475  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.923484  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:23.923490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:23.923554  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:23.963761  186170 cri.go:89] found id: ""
	I1028 12:19:23.963793  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.963809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:23.963820  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:23.963890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:24.001402  186170 cri.go:89] found id: ""
	I1028 12:19:24.001431  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.001440  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:24.001447  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:24.001512  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:24.042367  186170 cri.go:89] found id: ""
	I1028 12:19:24.042400  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.042410  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:24.042419  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:24.042480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:24.081838  186170 cri.go:89] found id: ""
	I1028 12:19:24.081865  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.081873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:24.081879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:24.081932  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:24.117066  186170 cri.go:89] found id: ""
	I1028 12:19:24.117096  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.117104  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:24.117113  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:24.117125  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:24.156892  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:24.156928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:24.210595  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:24.210631  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:24.226214  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:24.226248  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:24.304750  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:24.304775  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:24.304792  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:26.887059  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:26.901656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:26.901735  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:26.944377  186170 cri.go:89] found id: ""
	I1028 12:19:26.944407  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.944416  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:26.944425  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:26.944487  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:26.980794  186170 cri.go:89] found id: ""
	I1028 12:19:26.980827  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.980835  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:26.980841  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:26.980907  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:27.023661  186170 cri.go:89] found id: ""
	I1028 12:19:27.023686  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.023694  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:27.023701  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:27.023753  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:27.062325  186170 cri.go:89] found id: ""
	I1028 12:19:27.062353  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.062361  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:27.062369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:27.062417  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:27.101200  186170 cri.go:89] found id: ""
	I1028 12:19:27.101230  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.101237  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:27.101243  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:27.101300  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:27.139566  186170 cri.go:89] found id: ""
	I1028 12:19:27.139591  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.139598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:27.139605  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:27.139664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:27.183931  186170 cri.go:89] found id: ""
	I1028 12:19:27.183959  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.183968  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:27.183996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:27.184065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:27.226978  186170 cri.go:89] found id: ""
	I1028 12:19:27.227012  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.227027  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:27.227038  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:27.227067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:27.279752  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:27.279790  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:27.293477  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:27.293504  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:27.365813  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:27.365836  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:27.365850  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:27.458409  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:27.458466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:27.556370  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.057786  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:27.216093  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:29.715861  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:28.265912  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.266993  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:32.267566  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.023363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:30.036965  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:30.037032  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:30.077599  186170 cri.go:89] found id: ""
	I1028 12:19:30.077627  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.077635  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:30.077642  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:30.077691  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:30.115071  186170 cri.go:89] found id: ""
	I1028 12:19:30.115103  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.115113  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:30.115121  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:30.115189  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:30.150636  186170 cri.go:89] found id: ""
	I1028 12:19:30.150665  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.150678  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:30.150684  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:30.150747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:30.188339  186170 cri.go:89] found id: ""
	I1028 12:19:30.188380  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.188390  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:30.188397  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:30.188452  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:30.224072  186170 cri.go:89] found id: ""
	I1028 12:19:30.224102  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.224113  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:30.224121  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:30.224185  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:30.258784  186170 cri.go:89] found id: ""
	I1028 12:19:30.258822  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.258834  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:30.258842  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:30.258903  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:30.302495  186170 cri.go:89] found id: ""
	I1028 12:19:30.302527  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.302535  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:30.302541  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:30.302590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:30.339170  186170 cri.go:89] found id: ""
	I1028 12:19:30.339201  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.339213  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:30.339223  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:30.339236  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:30.396664  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:30.396700  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:30.411609  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:30.411638  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:30.484168  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:30.484196  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:30.484212  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:30.567664  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:30.567704  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:33.111268  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:33.125143  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:33.125229  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:33.168662  186170 cri.go:89] found id: ""
	I1028 12:19:33.168701  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.168712  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:33.168722  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:33.168792  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:33.222421  186170 cri.go:89] found id: ""
	I1028 12:19:33.222451  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.222463  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:33.222471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:33.222536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:33.275637  186170 cri.go:89] found id: ""
	I1028 12:19:33.275669  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.275680  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:33.275689  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:33.275751  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:32.555888  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.556782  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:31.716178  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.213813  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.213999  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.764307  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.766217  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:33.325787  186170 cri.go:89] found id: ""
	I1028 12:19:33.325818  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.325830  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:33.325840  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:33.325900  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:33.361597  186170 cri.go:89] found id: ""
	I1028 12:19:33.361634  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.361644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:33.361652  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:33.361744  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:33.401838  186170 cri.go:89] found id: ""
	I1028 12:19:33.401866  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.401874  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:33.401880  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:33.401941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:33.439315  186170 cri.go:89] found id: ""
	I1028 12:19:33.439342  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.439351  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:33.439359  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:33.439422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:33.479140  186170 cri.go:89] found id: ""
	I1028 12:19:33.479177  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.479188  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:33.479206  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:33.479222  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:33.534059  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:33.534102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:33.549379  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:33.549416  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:33.626567  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:33.626603  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:33.626619  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:33.702398  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:33.702441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.250145  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:36.265123  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:36.265193  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:36.304048  186170 cri.go:89] found id: ""
	I1028 12:19:36.304078  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.304087  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:36.304093  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:36.304141  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:36.348611  186170 cri.go:89] found id: ""
	I1028 12:19:36.348649  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.348660  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:36.348672  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:36.348739  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:36.390510  186170 cri.go:89] found id: ""
	I1028 12:19:36.390543  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.390555  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:36.390563  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:36.390627  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:36.430465  186170 cri.go:89] found id: ""
	I1028 12:19:36.430489  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.430496  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:36.430503  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:36.430556  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:36.472189  186170 cri.go:89] found id: ""
	I1028 12:19:36.472216  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.472226  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:36.472234  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:36.472332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:36.510029  186170 cri.go:89] found id: ""
	I1028 12:19:36.510057  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.510065  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:36.510073  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:36.510133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:36.548556  186170 cri.go:89] found id: ""
	I1028 12:19:36.548581  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.548589  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:36.548595  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:36.548641  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:36.592965  186170 cri.go:89] found id: ""
	I1028 12:19:36.592993  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.593002  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:36.593013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:36.593032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:36.608843  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:36.608878  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:36.680629  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:36.680655  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:36.680672  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:36.768605  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:36.768636  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.815293  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:36.815334  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:37.056333  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.559461  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:38.214406  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:40.214795  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.264988  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:41.267329  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.369371  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:39.382819  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:39.382905  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:39.421953  186170 cri.go:89] found id: ""
	I1028 12:19:39.421990  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.422018  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:39.422028  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:39.422088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:39.457426  186170 cri.go:89] found id: ""
	I1028 12:19:39.457461  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.457478  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:39.457484  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:39.457558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:39.494983  186170 cri.go:89] found id: ""
	I1028 12:19:39.495008  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.495018  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:39.495026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:39.495105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:39.530187  186170 cri.go:89] found id: ""
	I1028 12:19:39.530221  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.530233  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:39.530242  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:39.530308  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:39.571088  186170 cri.go:89] found id: ""
	I1028 12:19:39.571123  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.571133  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:39.571142  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:39.571204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:39.605684  186170 cri.go:89] found id: ""
	I1028 12:19:39.605719  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.605731  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:39.605739  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:39.605804  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:39.639083  186170 cri.go:89] found id: ""
	I1028 12:19:39.639115  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.639125  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:39.639133  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:39.639195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:39.676273  186170 cri.go:89] found id: ""
	I1028 12:19:39.676310  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.676321  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:39.676332  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:39.676349  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:39.733153  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:39.733190  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:39.748475  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:39.748513  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:39.823884  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:39.823906  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:39.823920  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:39.903711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:39.903763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.447237  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:42.460741  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:42.460822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:42.500518  186170 cri.go:89] found id: ""
	I1028 12:19:42.500553  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.500565  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:42.500574  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:42.500636  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:42.542836  186170 cri.go:89] found id: ""
	I1028 12:19:42.542867  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.542875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:42.542882  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:42.542943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:42.581271  186170 cri.go:89] found id: ""
	I1028 12:19:42.581303  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.581322  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:42.581331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:42.581382  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:42.616772  186170 cri.go:89] found id: ""
	I1028 12:19:42.616796  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.616803  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:42.616809  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:42.616858  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:42.650467  186170 cri.go:89] found id: ""
	I1028 12:19:42.650504  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.650515  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:42.650524  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:42.650590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:42.688677  186170 cri.go:89] found id: ""
	I1028 12:19:42.688713  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.688726  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:42.688734  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:42.688796  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:42.727141  186170 cri.go:89] found id: ""
	I1028 12:19:42.727167  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.727174  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:42.727181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:42.727231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:42.767373  186170 cri.go:89] found id: ""
	I1028 12:19:42.767404  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.767415  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:42.767425  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:42.767438  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:42.818474  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:42.818511  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:42.832181  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:42.832210  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:42.905428  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:42.905450  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:42.905465  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:42.985614  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:42.985653  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.056568  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:44.057256  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:42.715261  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.215472  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:43.765595  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.766087  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.527361  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:45.541487  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:45.541574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:45.579562  186170 cri.go:89] found id: ""
	I1028 12:19:45.579591  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.579600  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:45.579606  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:45.579666  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:45.614461  186170 cri.go:89] found id: ""
	I1028 12:19:45.614494  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.614504  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:45.614512  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:45.614575  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:45.651495  186170 cri.go:89] found id: ""
	I1028 12:19:45.651538  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.651550  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:45.651558  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:45.651619  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:45.691664  186170 cri.go:89] found id: ""
	I1028 12:19:45.691699  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.691710  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:45.691718  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:45.691785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:45.730284  186170 cri.go:89] found id: ""
	I1028 12:19:45.730325  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.730341  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:45.730348  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:45.730410  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:45.766524  186170 cri.go:89] found id: ""
	I1028 12:19:45.766554  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.766565  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:45.766573  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:45.766630  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:45.803353  186170 cri.go:89] found id: ""
	I1028 12:19:45.803381  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.803393  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:45.803400  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:45.803468  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:45.842928  186170 cri.go:89] found id: ""
	I1028 12:19:45.842953  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.842960  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:45.842968  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:45.842979  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:45.921782  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:45.921809  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:45.921826  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:45.997269  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:45.997321  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:46.036008  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:46.036042  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:46.090242  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:46.090282  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:46.058519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.556533  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:47.215644  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:49.715563  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.266115  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:50.268535  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:52.271227  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.607052  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:48.620745  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:48.620816  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:48.657550  186170 cri.go:89] found id: ""
	I1028 12:19:48.657582  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.657592  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:48.657601  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:48.657676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:48.695514  186170 cri.go:89] found id: ""
	I1028 12:19:48.695542  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.695549  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:48.695555  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:48.695603  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:48.733589  186170 cri.go:89] found id: ""
	I1028 12:19:48.733616  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.733624  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:48.733631  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:48.733680  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:48.768340  186170 cri.go:89] found id: ""
	I1028 12:19:48.768370  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.768378  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:48.768384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:48.768435  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:48.818057  186170 cri.go:89] found id: ""
	I1028 12:19:48.818086  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.818096  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:48.818105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:48.818169  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:48.854663  186170 cri.go:89] found id: ""
	I1028 12:19:48.854695  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.854705  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:48.854715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:48.854785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:48.888919  186170 cri.go:89] found id: ""
	I1028 12:19:48.888949  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.888960  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:48.888969  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:48.889030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:48.923871  186170 cri.go:89] found id: ""
	I1028 12:19:48.923900  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.923908  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:48.923917  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:48.923928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:48.977985  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:48.978025  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:48.992861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:48.992893  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:49.071925  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:49.071952  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:49.071969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:49.149743  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:49.149784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.693881  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:51.708017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:51.708079  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:51.748837  186170 cri.go:89] found id: ""
	I1028 12:19:51.748872  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.748883  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:51.748892  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:51.748957  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:51.793684  186170 cri.go:89] found id: ""
	I1028 12:19:51.793716  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.793733  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:51.793741  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:51.793803  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:51.832104  186170 cri.go:89] found id: ""
	I1028 12:19:51.832140  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.832151  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:51.832159  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:51.832225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:51.866214  186170 cri.go:89] found id: ""
	I1028 12:19:51.866250  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.866264  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:51.866270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:51.866345  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:51.909073  186170 cri.go:89] found id: ""
	I1028 12:19:51.909100  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.909107  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:51.909113  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:51.909160  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:51.949202  186170 cri.go:89] found id: ""
	I1028 12:19:51.949231  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.949239  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:51.949245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:51.949306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:51.990977  186170 cri.go:89] found id: ""
	I1028 12:19:51.991004  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.991011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:51.991018  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:51.991069  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:52.027180  186170 cri.go:89] found id: ""
	I1028 12:19:52.027215  186170 logs.go:282] 0 containers: []
	W1028 12:19:52.027226  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:52.027237  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:52.027259  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:52.080482  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:52.080536  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:52.097572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:52.097612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:52.173055  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:52.173095  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:52.173113  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:52.249950  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:52.249995  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.056089  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:53.056973  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:55.057853  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:51.716787  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.214943  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.765208  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:57.267687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.794765  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:54.809435  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:54.809548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:54.846763  186170 cri.go:89] found id: ""
	I1028 12:19:54.846793  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.846805  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:54.846815  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:54.846876  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:54.885359  186170 cri.go:89] found id: ""
	I1028 12:19:54.885396  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.885409  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:54.885417  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:54.885481  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:54.922612  186170 cri.go:89] found id: ""
	I1028 12:19:54.922639  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.922650  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:54.922659  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:54.922722  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:54.958406  186170 cri.go:89] found id: ""
	I1028 12:19:54.958439  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.958450  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:54.958459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:54.958525  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:54.995319  186170 cri.go:89] found id: ""
	I1028 12:19:54.995350  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.995361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:54.995370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:54.995440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:55.032511  186170 cri.go:89] found id: ""
	I1028 12:19:55.032543  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.032551  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:55.032559  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:55.032624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:55.073196  186170 cri.go:89] found id: ""
	I1028 12:19:55.073226  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.073238  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:55.073245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:55.073310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:55.113726  186170 cri.go:89] found id: ""
	I1028 12:19:55.113754  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.113762  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:55.113771  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:55.113787  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:55.164402  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:55.164442  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:55.180729  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:55.180763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:55.254437  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:55.254466  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:55.254483  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:55.341392  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:55.341441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:57.883896  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:57.897429  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:57.897539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:57.933084  186170 cri.go:89] found id: ""
	I1028 12:19:57.933109  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.933118  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:57.933127  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:57.933198  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:57.971244  186170 cri.go:89] found id: ""
	I1028 12:19:57.971276  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.971289  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:57.971298  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:57.971361  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:58.007916  186170 cri.go:89] found id: ""
	I1028 12:19:58.007952  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.007963  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:58.007972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:58.008050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:58.043042  186170 cri.go:89] found id: ""
	I1028 12:19:58.043084  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.043094  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:58.043103  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:58.043172  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:58.080277  186170 cri.go:89] found id: ""
	I1028 12:19:58.080314  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.080324  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:58.080332  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:58.080395  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:58.117254  186170 cri.go:89] found id: ""
	I1028 12:19:58.117292  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.117301  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:58.117308  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:58.117356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:58.152830  186170 cri.go:89] found id: ""
	I1028 12:19:58.152862  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.152873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:58.152881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:58.152946  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:58.190229  186170 cri.go:89] found id: ""
	I1028 12:19:58.190259  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.190270  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:58.190281  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:58.190296  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:58.231792  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:58.231823  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:58.291189  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:58.291233  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:58.307804  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:58.307837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:19:57.556056  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.557091  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:00.050404  185942 pod_ready.go:82] duration metric: took 4m0.000726571s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:00.050457  185942 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:00.050479  185942 pod_ready.go:39] duration metric: took 4m12.759391454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:00.050506  185942 kubeadm.go:597] duration metric: took 4m20.427916933s to restartPrimaryControlPlane
	W1028 12:20:00.050569  185942 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:00.050616  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:19:56.715048  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.215821  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.769397  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:02.265702  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:19:58.384490  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:58.384515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:58.384530  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:00.963569  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:00.977292  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:20:00.977363  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:20:01.017161  186170 cri.go:89] found id: ""
	I1028 12:20:01.017190  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.017198  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:20:01.017204  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:20:01.017254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:20:01.054651  186170 cri.go:89] found id: ""
	I1028 12:20:01.054687  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.054698  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:20:01.054705  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:20:01.054768  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:20:01.092934  186170 cri.go:89] found id: ""
	I1028 12:20:01.092968  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.092979  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:20:01.092988  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:20:01.093048  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:20:01.134463  186170 cri.go:89] found id: ""
	I1028 12:20:01.134499  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.134510  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:20:01.134519  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:20:01.134580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:20:01.171922  186170 cri.go:89] found id: ""
	I1028 12:20:01.171960  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.171970  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:20:01.171978  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:20:01.172050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:20:01.208664  186170 cri.go:89] found id: ""
	I1028 12:20:01.208694  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.208703  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:20:01.208715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:20:01.208781  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:20:01.248207  186170 cri.go:89] found id: ""
	I1028 12:20:01.248242  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.248251  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:20:01.248258  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:20:01.248318  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:20:01.289182  186170 cri.go:89] found id: ""
	I1028 12:20:01.289212  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.289222  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:20:01.289233  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:20:01.289277  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:20:01.334646  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:20:01.334679  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:20:01.396212  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:20:01.396255  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:20:01.411774  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:20:01.411801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:20:01.497745  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:20:01.497772  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:20:01.497784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:01.715264  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.216628  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.765386  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:06.765802  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.092363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:04.106585  186170 kubeadm.go:597] duration metric: took 4m1.83229859s to restartPrimaryControlPlane
	W1028 12:20:04.106657  186170 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:04.106678  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:07.549703  186170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.442997936s)
	I1028 12:20:07.549781  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:07.565304  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:07.577919  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:07.590433  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:07.590461  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:07.590514  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:07.600793  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:07.600858  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:07.611331  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:07.621191  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:07.621256  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:07.631722  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.642180  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:07.642255  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.654425  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:07.664696  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:07.664755  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:07.675272  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:07.902931  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:06.715439  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.214561  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.216343  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.265899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.764867  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:13.716362  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.214893  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:14.264333  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.765340  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:18.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:20.715790  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:19.270934  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:21.764931  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:22.715880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:25.216499  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:23.766240  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.271567  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.353961  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.303321788s)
	I1028 12:20:26.354038  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:26.373066  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:26.386209  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:26.398568  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:26.398591  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:26.398634  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:26.410916  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:26.410976  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:26.423771  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:26.435883  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:26.435961  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:26.448506  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.460449  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:26.460506  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.472817  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:26.483653  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:26.483743  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:26.494435  185942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:26.682378  185942 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:27.715587  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:29.717407  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:28.766206  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:30.766289  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.820344  185942 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:20:35.820446  185942 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:20:35.820555  185942 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:20:35.820688  185942 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:20:35.820812  185942 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:20:35.820902  185942 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:20:35.823423  185942 out.go:235]   - Generating certificates and keys ...
	I1028 12:20:35.823594  185942 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:20:35.823700  185942 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:20:35.823804  185942 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:20:35.823893  185942 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:20:35.824001  185942 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:20:35.824082  185942 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:20:35.824167  185942 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:20:35.824255  185942 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:20:35.824360  185942 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:20:35.824445  185942 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:20:35.824504  185942 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:20:35.824566  185942 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:20:35.824622  185942 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:20:35.824725  185942 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:20:35.824805  185942 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:20:35.824944  185942 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:20:35.825058  185942 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:20:35.825209  185942 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:20:35.825300  185942 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:20:35.826890  185942 out.go:235]   - Booting up control plane ...
	I1028 12:20:35.827007  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:20:35.827077  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:20:35.827142  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:20:35.827285  185942 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:20:35.827420  185942 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:20:35.827487  185942 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:20:35.827705  185942 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:20:35.827848  185942 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:20:35.827943  185942 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.264999ms
	I1028 12:20:35.828059  185942 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:20:35.828130  185942 kubeadm.go:310] [api-check] The API server is healthy after 5.502732581s
	I1028 12:20:35.828299  185942 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:20:35.828472  185942 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:20:35.828523  185942 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:20:35.828712  185942 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-709250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:20:35.828764  185942 kubeadm.go:310] [bootstrap-token] Using token: srdxzz.lxk56bs7sgkeocij
	I1028 12:20:35.830228  185942 out.go:235]   - Configuring RBAC rules ...
	I1028 12:20:35.830335  185942 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:20:35.830422  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:20:35.830563  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:20:35.830729  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:20:35.830842  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:20:35.830928  185942 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:20:35.831065  185942 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:20:35.831122  185942 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:20:35.831174  185942 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:20:35.831181  185942 kubeadm.go:310] 
	I1028 12:20:35.831229  185942 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:20:35.831237  185942 kubeadm.go:310] 
	I1028 12:20:35.831302  185942 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:20:35.831313  185942 kubeadm.go:310] 
	I1028 12:20:35.831356  185942 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:20:35.831439  185942 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:20:35.831517  185942 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:20:35.831531  185942 kubeadm.go:310] 
	I1028 12:20:35.831616  185942 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:20:35.831628  185942 kubeadm.go:310] 
	I1028 12:20:35.831678  185942 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:20:35.831682  185942 kubeadm.go:310] 
	I1028 12:20:35.831730  185942 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:20:35.831809  185942 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:20:35.831921  185942 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:20:35.831933  185942 kubeadm.go:310] 
	I1028 12:20:35.832041  185942 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:20:35.832141  185942 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:20:35.832150  185942 kubeadm.go:310] 
	I1028 12:20:35.832249  185942 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832373  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:20:35.832404  185942 kubeadm.go:310] 	--control-plane 
	I1028 12:20:35.832414  185942 kubeadm.go:310] 
	I1028 12:20:35.832516  185942 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:20:35.832524  185942 kubeadm.go:310] 
	I1028 12:20:35.832642  185942 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832812  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
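	The --discovery-token-ca-cert-hash printed in the join command above is just the SHA-256 of the cluster CA's public key, so it can be recomputed on the control-plane node if this banner is lost. A sketch, assuming the certificateDir reported earlier in this run (/var/lib/minikube/certs):

	sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'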
	I1028 12:20:35.832833  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:20:35.832843  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:20:35.834428  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:20:35.835603  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:20:35.847857  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
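	The 496-byte 1-k8s.conflist copied above is not reproduced in this log. Purely as an illustration, a bridge CNI conflist of this kind generally looks something like the following; the field values here are generic assumptions, not the file minikube actually wrote:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF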
	I1028 12:20:35.867921  185942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:20:35.868088  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:35.868107  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709250 minikube.k8s.io/updated_at=2024_10_28T12_20_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=embed-certs-709250 minikube.k8s.io/primary=true
	I1028 12:20:35.908233  185942 ops.go:34] apiserver oom_adj: -16
	I1028 12:20:32.215299  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:34.716880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:32.766922  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.267132  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:36.121114  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:36.621188  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.122032  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.621405  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.122105  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.621960  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.122142  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.622093  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.121643  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.287609  185942 kubeadm.go:1113] duration metric: took 4.419612649s to wait for elevateKubeSystemPrivileges
	I1028 12:20:40.287656  185942 kubeadm.go:394] duration metric: took 5m0.720591132s to StartCluster
	I1028 12:20:40.287703  185942 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.287814  185942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:20:40.290472  185942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.290787  185942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:20:40.291051  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:20:40.290926  185942 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:20:40.291125  185942 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709250"
	I1028 12:20:40.291126  185942 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709250"
	I1028 12:20:40.291142  185942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709250"
	I1028 12:20:40.291148  185942 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709250"
	W1028 12:20:40.291158  185942 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:20:40.291182  185942 addons.go:69] Setting metrics-server=true in profile "embed-certs-709250"
	I1028 12:20:40.291220  185942 addons.go:234] Setting addon metrics-server=true in "embed-certs-709250"
	W1028 12:20:40.291233  185942 addons.go:243] addon metrics-server should already be in state true
	I1028 12:20:40.291282  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291195  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291593  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291631  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291727  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291771  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291786  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291813  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.292877  185942 out.go:177] * Verifying Kubernetes components...
	I1028 12:20:40.294858  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:20:40.310225  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I1028 12:20:40.310814  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.311524  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.311552  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.311961  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.312174  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.312867  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1028 12:20:40.312901  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I1028 12:20:40.313354  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313389  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313964  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.313987  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.313967  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.314040  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.314365  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314428  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314883  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.314907  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.315710  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.315744  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.316210  185942 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709250"
	W1028 12:20:40.316229  185942 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:20:40.316261  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.316619  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.316648  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.331940  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1028 12:20:40.332732  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.333487  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.333537  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.333932  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.334145  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.336054  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I1028 12:20:40.336291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.336441  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337079  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.337117  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.337211  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I1028 12:20:40.337597  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337998  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338171  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.338189  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.338291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.338925  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338972  185942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:20:40.339570  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.339621  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.340197  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.341080  185942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.341099  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:20:40.341115  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.341872  185942 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:20:40.343244  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:20:40.343278  185942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:20:40.343308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.344718  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345186  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.345216  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345457  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.345666  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.345842  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.346053  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.346977  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347514  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.347546  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347739  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.347936  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.348069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.348236  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.357912  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I1028 12:20:40.358358  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.358838  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.358858  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.359224  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.359441  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.361308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.361630  185942 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.361654  185942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:20:40.361675  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.365789  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366319  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.366347  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366659  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.366879  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.367069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.367245  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.526205  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:20:40.545404  185942 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555003  185942 node_ready.go:49] node "embed-certs-709250" has status "Ready":"True"
	I1028 12:20:40.555028  185942 node_ready.go:38] duration metric: took 9.592797ms for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555047  185942 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:40.564021  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:40.660020  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:20:40.660061  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:20:40.666435  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.691423  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.692384  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:20:40.692411  185942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:20:40.739518  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:20:40.739549  185942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:20:40.765228  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
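	Note that the metrics-server addon is applied with the image fake.domain/registry.k8s.io/echoserver:1.4 (the "Using image" line above), which looks deliberately unpullable — consistent with the metrics-server pods in the surrounding polling lines never reaching "Ready". A hand-run way to check the addon's rollout from the host, assuming the addon uses the usual k8s-app=metrics-server label:

	kubectl --context embed-certs-709250 -n kube-system rollout status deploy/metrics-server --timeout=60s
	kubectl --context embed-certs-709250 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context embed-certs-709250 top nodes   # will fail until metrics-server is actually serving metrics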
	I1028 12:20:37.216347  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:39.716471  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.192384  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192422  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192491  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192514  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192740  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192759  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192783  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192791  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192915  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192942  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192951  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192962  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.193093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193125  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193131  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.193373  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193403  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193409  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.229776  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.229808  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.230111  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.230127  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.624688  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.624714  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625048  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.625055  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625066  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625074  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.625081  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625283  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625312  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625325  185942 addons.go:475] Verifying addon metrics-server=true in "embed-certs-709250"
	I1028 12:20:41.625329  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.627194  185942 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:20:37.771166  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:40.265616  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.265990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.628572  185942 addons.go:510] duration metric: took 1.337655555s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:20:42.572801  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.571062  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.571095  185942 pod_ready.go:82] duration metric: took 3.007040788s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.571110  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576592  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.576620  185942 pod_ready.go:82] duration metric: took 5.500425ms for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576633  185942 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:45.583586  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.216524  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:44.715547  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.758721  186547 pod_ready.go:82] duration metric: took 4m0.000295852s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:43.758758  186547 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:43.758783  186547 pod_ready.go:39] duration metric: took 4m13.710127509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:43.758811  186547 kubeadm.go:597] duration metric: took 4m21.647032906s to restartPrimaryControlPlane
	W1028 12:20:43.758873  186547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:43.758910  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:47.089478  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.089502  185942 pod_ready.go:82] duration metric: took 3.512861746s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.089512  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094229  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.094255  185942 pod_ready.go:82] duration metric: took 4.736326ms for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094267  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098823  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.098859  185942 pod_ready.go:82] duration metric: took 4.584003ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098872  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104063  185942 pod_ready.go:93] pod "kube-proxy-gck6r" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.104083  185942 pod_ready.go:82] duration metric: took 5.204526ms for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104091  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168177  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.168210  185942 pod_ready.go:82] duration metric: took 64.110225ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168221  185942 pod_ready.go:39] duration metric: took 6.613160968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:47.168243  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:20:47.168309  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:47.186907  185942 api_server.go:72] duration metric: took 6.896070864s to wait for apiserver process to appear ...
	I1028 12:20:47.186944  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:20:47.186998  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:20:47.191428  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:20:47.192677  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:20:47.192708  185942 api_server.go:131] duration metric: took 5.753471ms to wait for apiserver health ...
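	The healthz probe above can be reproduced by hand; /healthz, /readyz and /livez are anonymously readable on a default kubeadm cluster, and -k skips TLS verification since the cluster CA is not in the host trust store:

	curl -sk https://192.168.39.211:8443/healthz
	curl -sk https://192.168.39.211:8443/readyz?verbose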
	I1028 12:20:47.192719  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:20:47.372534  185942 system_pods.go:59] 9 kube-system pods found
	I1028 12:20:47.372571  185942 system_pods.go:61] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.372580  185942 system_pods.go:61] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.372585  185942 system_pods.go:61] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.372590  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.372595  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.372599  185942 system_pods.go:61] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.372605  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.372614  185942 system_pods.go:61] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.372620  185942 system_pods.go:61] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.372633  185942 system_pods.go:74] duration metric: took 179.905205ms to wait for pod list to return data ...
	I1028 12:20:47.372647  185942 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:20:47.569853  185942 default_sa.go:45] found service account: "default"
	I1028 12:20:47.569886  185942 default_sa.go:55] duration metric: took 197.228265ms for default service account to be created ...
	I1028 12:20:47.569900  185942 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:20:47.770906  185942 system_pods.go:86] 9 kube-system pods found
	I1028 12:20:47.770941  185942 system_pods.go:89] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.770948  185942 system_pods.go:89] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.770953  185942 system_pods.go:89] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.770956  185942 system_pods.go:89] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.770960  185942 system_pods.go:89] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.770964  185942 system_pods.go:89] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.770967  185942 system_pods.go:89] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.770973  185942 system_pods.go:89] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.770977  185942 system_pods.go:89] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.770984  185942 system_pods.go:126] duration metric: took 201.078078ms to wait for k8s-apps to be running ...
	I1028 12:20:47.770990  185942 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:20:47.771033  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:47.787139  185942 system_svc.go:56] duration metric: took 16.13776ms WaitForService to wait for kubelet
	I1028 12:20:47.787171  185942 kubeadm.go:582] duration metric: took 7.496343244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:20:47.787191  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:20:47.969485  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:20:47.969516  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:20:47.969547  185942 node_conditions.go:105] duration metric: took 182.350787ms to run NodePressure ...
	I1028 12:20:47.969562  185942 start.go:241] waiting for startup goroutines ...
	I1028 12:20:47.969572  185942 start.go:246] waiting for cluster config update ...
	I1028 12:20:47.969586  185942 start.go:255] writing updated cluster config ...
	I1028 12:20:47.969916  185942 ssh_runner.go:195] Run: rm -f paused
	I1028 12:20:48.021806  185942 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:20:48.023816  185942 out.go:177] * Done! kubectl is now configured to use "embed-certs-709250" cluster and "default" namespace by default
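	A quick post-start sanity check from the host, using the profile and context name reported above (a sketch, not commands executed in this run):

	minikube status -p embed-certs-709250
	kubectl --context embed-certs-709250 get nodes -o wide
	kubectl --context embed-certs-709250 -n kube-system get pods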
	I1028 12:20:46.716844  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:49.216673  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:51.715101  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:53.715509  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:56.217201  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:58.715405  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:00.715890  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:03.214669  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:05.215054  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.108895  186547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.349960271s)
	I1028 12:21:10.108979  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:10.126064  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:21:10.139862  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:21:10.150752  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:21:10.150780  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:21:10.150837  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:21:10.161522  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:21:10.161604  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:21:10.172230  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:21:10.183231  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:21:10.183299  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:21:10.194261  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.204462  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:21:10.204524  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.214991  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:21:10.225246  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:21:10.225315  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:21:10.235439  186547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:21:10.280951  186547 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:21:10.281020  186547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:21:10.391997  186547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:21:10.392163  186547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:21:10.392297  186547 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:21:10.402113  186547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:21:07.217549  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:09.716985  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.404087  186547 out.go:235]   - Generating certificates and keys ...
	I1028 12:21:10.404194  186547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:21:10.404312  186547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:21:10.404441  186547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:21:10.404537  186547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:21:10.404642  186547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:21:10.404719  186547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:21:10.404824  186547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:21:10.404914  186547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:21:10.405021  186547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:21:10.405124  186547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:21:10.405185  186547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:21:10.405269  186547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:21:10.608657  186547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:21:10.910608  186547 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:21:11.076768  186547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:21:11.244109  186547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:21:11.685910  186547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:21:11.686470  186547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:21:11.692266  186547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:21:11.694100  186547 out.go:235]   - Booting up control plane ...
	I1028 12:21:11.694231  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:21:11.694377  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:21:11.694607  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:21:11.713908  186547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:21:11.720788  186547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:21:11.720874  186547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:21:11.856867  186547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:21:11.856998  186547 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:21:12.358968  186547 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.942759ms
	I1028 12:21:12.359067  186547 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:21:12.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:14.208408  185546 pod_ready.go:82] duration metric: took 4m0.000135609s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:21:14.208447  185546 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:21:14.208457  185546 pod_ready.go:39] duration metric: took 4m3.200735753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:14.208485  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:14.208519  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:14.208571  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:14.266154  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.266184  185546 cri.go:89] found id: ""
	I1028 12:21:14.266196  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:14.266255  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.271416  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:14.271497  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:14.310426  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.310457  185546 cri.go:89] found id: ""
	I1028 12:21:14.310467  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:14.310529  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.314961  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:14.315037  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:14.362502  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.362530  185546 cri.go:89] found id: ""
	I1028 12:21:14.362540  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:14.362602  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.368118  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:14.368198  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:14.416827  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.416867  185546 cri.go:89] found id: ""
	I1028 12:21:14.416877  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:14.416943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.421640  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:14.421716  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:14.473506  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:14.473552  185546 cri.go:89] found id: ""
	I1028 12:21:14.473563  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:14.473627  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.480106  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:14.480183  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:14.529939  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:14.529964  185546 cri.go:89] found id: ""
	I1028 12:21:14.529971  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:14.530120  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.536199  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:14.536264  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:14.578374  185546 cri.go:89] found id: ""
	I1028 12:21:14.578407  185546 logs.go:282] 0 containers: []
	W1028 12:21:14.578419  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:14.578428  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:14.578490  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:14.620216  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:14.620243  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:14.620249  185546 cri.go:89] found id: ""
	I1028 12:21:14.620258  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:14.620323  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.625798  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.630653  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:14.630683  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:14.645364  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:14.645404  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.686202  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:14.686234  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.730094  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:14.730125  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:14.786272  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:14.786322  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:14.875705  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:14.875746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.931913  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:14.931960  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.991914  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:14.991953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:15.037022  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:15.037056  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:15.107597  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:15.107649  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:15.161401  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:15.161442  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:15.201916  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:15.201953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:15.682647  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:15.682694  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:17.861193  186547 kubeadm.go:310] [api-check] The API server is healthy after 5.502448006s
	I1028 12:21:17.874856  186547 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:21:17.889216  186547 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:21:17.933411  186547 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:21:17.933726  186547 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-349222 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:21:17.964667  186547 kubeadm.go:310] [bootstrap-token] Using token: o3vo7c.1x7759cggrb8kl7r
	I1028 12:21:17.966405  186547 out.go:235]   - Configuring RBAC rules ...
	I1028 12:21:17.966590  186547 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:21:17.982231  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:21:17.991850  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:21:17.996073  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:21:18.003531  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:21:18.008369  186547 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:21:18.272751  186547 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:21:18.724493  186547 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:21:19.269583  186547 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:21:19.270654  186547 kubeadm.go:310] 
	I1028 12:21:19.270715  186547 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:21:19.270722  186547 kubeadm.go:310] 
	I1028 12:21:19.270782  186547 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:21:19.270787  186547 kubeadm.go:310] 
	I1028 12:21:19.270816  186547 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:21:19.270875  186547 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:21:19.270938  186547 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:21:19.270949  186547 kubeadm.go:310] 
	I1028 12:21:19.271022  186547 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:21:19.271063  186547 kubeadm.go:310] 
	I1028 12:21:19.271165  186547 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:21:19.271190  186547 kubeadm.go:310] 
	I1028 12:21:19.271266  186547 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:21:19.271380  186547 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:21:19.271470  186547 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:21:19.271479  186547 kubeadm.go:310] 
	I1028 12:21:19.271600  186547 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:21:19.271697  186547 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:21:19.271709  186547 kubeadm.go:310] 
	I1028 12:21:19.271838  186547 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272010  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:21:19.272068  186547 kubeadm.go:310] 	--control-plane 
	I1028 12:21:19.272079  186547 kubeadm.go:310] 
	I1028 12:21:19.272250  186547 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:21:19.272270  186547 kubeadm.go:310] 
	I1028 12:21:19.272391  186547 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272568  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 12:21:19.273899  186547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:21:19.273955  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:21:19.273977  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:21:19.275868  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:21:18.355132  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:18.373260  185546 api_server.go:72] duration metric: took 4m14.615888944s to wait for apiserver process to appear ...
	I1028 12:21:18.373292  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:18.373353  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:18.373410  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:18.413207  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.413239  185546 cri.go:89] found id: ""
	I1028 12:21:18.413250  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:18.413336  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.419588  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:18.419655  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:18.476341  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.476373  185546 cri.go:89] found id: ""
	I1028 12:21:18.476383  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:18.476450  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.482835  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:18.482926  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:18.524934  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.524964  185546 cri.go:89] found id: ""
	I1028 12:21:18.524975  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:18.525040  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.530198  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:18.530284  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:18.577310  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:18.577338  185546 cri.go:89] found id: ""
	I1028 12:21:18.577349  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:18.577413  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.583048  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:18.583133  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:18.622556  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:18.622587  185546 cri.go:89] found id: ""
	I1028 12:21:18.622598  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:18.622701  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.628450  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:18.628540  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:18.674827  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:18.674861  185546 cri.go:89] found id: ""
	I1028 12:21:18.674873  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:18.674943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.680282  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:18.680354  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:18.738014  185546 cri.go:89] found id: ""
	I1028 12:21:18.738044  185546 logs.go:282] 0 containers: []
	W1028 12:21:18.738061  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:18.738070  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:18.738142  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:18.780615  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:18.780645  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:18.780651  185546 cri.go:89] found id: ""
	I1028 12:21:18.780660  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:18.780725  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.786003  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.790208  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:18.790231  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:18.806481  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:18.806523  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.853343  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:18.853382  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.906386  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:18.906424  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.948149  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:18.948182  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:19.000642  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:19.000678  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:19.038715  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:19.038744  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:19.079234  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:19.079271  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:19.147309  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:19.147349  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:19.271582  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:19.271620  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:19.319149  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:19.319195  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:19.385399  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:19.385437  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:19.811993  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:19.812035  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:19.277402  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:21:19.296307  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:21:19.323315  186547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-349222 minikube.k8s.io/updated_at=2024_10_28T12_21_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=default-k8s-diff-port-349222 minikube.k8s.io/primary=true
	I1028 12:21:19.550855  186547 ops.go:34] apiserver oom_adj: -16
	I1028 12:21:19.550882  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.051004  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.551001  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.051215  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.551283  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.050989  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.551423  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.051101  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.151453  186547 kubeadm.go:1113] duration metric: took 3.828156807s to wait for elevateKubeSystemPrivileges
	I1028 12:21:23.151505  186547 kubeadm.go:394] duration metric: took 5m1.103220882s to StartCluster
	I1028 12:21:23.151530  186547 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.151623  186547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:21:23.153557  186547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.153874  186547 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:21:23.153996  186547 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:21:23.154101  186547 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154122  186547 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154133  186547 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:21:23.154128  186547 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154165  186547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-349222"
	I1028 12:21:23.154160  186547 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154197  186547 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154213  186547 addons.go:243] addon metrics-server should already be in state true
	I1028 12:21:23.154167  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154254  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154664  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154679  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154749  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154135  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:21:23.154803  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154844  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154948  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.155649  186547 out.go:177] * Verifying Kubernetes components...
	I1028 12:21:23.157234  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:21:23.172278  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I1028 12:21:23.172870  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.173402  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.173429  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.173851  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.174056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.176299  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1028 12:21:23.176307  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I1028 12:21:23.176897  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177023  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177553  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177576  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177589  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177606  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177887  186547 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.177912  186547 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:21:23.177945  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.177971  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178030  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178369  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178404  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178541  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178572  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178961  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.179002  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.196089  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I1028 12:21:23.197979  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.198578  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.198607  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.199082  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.199301  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.199604  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I1028 12:21:23.200120  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.200519  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.200539  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.200938  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.201204  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.201711  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.201794  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1028 12:21:23.202225  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.202937  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.202956  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.203305  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.203753  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.203791  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.204026  186547 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:21:23.204210  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.205470  186547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:21:23.205490  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:21:23.205554  186547 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:21:23.205576  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.207334  186547 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.207352  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:21:23.207372  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.209573  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.210230  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210366  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.210608  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.210806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.211061  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.211884  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.211910  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.211928  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.212104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.212351  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.212570  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.212762  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.231664  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1028 12:21:23.232283  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.232904  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.232929  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.233414  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.233658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.236162  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.236665  186547 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.236680  186547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:21:23.236700  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.240368  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.240697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240848  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.241034  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.241156  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.241281  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.409461  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:21:23.430686  186547 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442439  186547 node_ready.go:49] node "default-k8s-diff-port-349222" has status "Ready":"True"
	I1028 12:21:23.442466  186547 node_ready.go:38] duration metric: took 11.749381ms for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442480  186547 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:23.447741  186547 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:23.515393  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.545556  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.575253  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:21:23.575280  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:21:23.663892  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:21:23.663920  186547 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:21:23.745621  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:23.745656  186547 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:21:23.823360  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:24.391754  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.391789  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.392092  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.392112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.392123  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.392130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393697  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393716  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.393725  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.393733  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393810  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393828  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393886  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394088  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.394112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.413957  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.414000  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.414363  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.414385  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853053  186547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029641945s)
	I1028 12:21:24.853107  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853123  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853434  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.853492  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853501  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853518  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853543  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853784  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853801  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853813  186547 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-349222"
	I1028 12:21:24.855707  186547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:21:22.373623  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:21:22.379559  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:21:22.380750  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:22.380772  185546 api_server.go:131] duration metric: took 4.007460794s to wait for apiserver health ...
	I1028 12:21:22.380783  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:22.380811  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:22.380875  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:22.426678  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:22.426710  185546 cri.go:89] found id: ""
	I1028 12:21:22.426720  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:22.426781  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.431942  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:22.432014  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:22.472504  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:22.472531  185546 cri.go:89] found id: ""
	I1028 12:21:22.472540  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:22.472595  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.478446  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:22.478511  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:22.520149  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.520169  185546 cri.go:89] found id: ""
	I1028 12:21:22.520177  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:22.520235  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.525716  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:22.525804  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:22.564801  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:22.564832  185546 cri.go:89] found id: ""
	I1028 12:21:22.564844  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:22.564909  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.570065  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:22.570147  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:22.613601  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.613628  185546 cri.go:89] found id: ""
	I1028 12:21:22.613637  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:22.613700  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.618413  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:22.618483  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:22.664329  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.664358  185546 cri.go:89] found id: ""
	I1028 12:21:22.664369  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:22.664430  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.669013  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:22.669084  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:22.706046  185546 cri.go:89] found id: ""
	I1028 12:21:22.706074  185546 logs.go:282] 0 containers: []
	W1028 12:21:22.706084  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:22.706091  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:22.706159  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:22.747718  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.747744  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.747750  185546 cri.go:89] found id: ""
	I1028 12:21:22.747759  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:22.747825  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.752857  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.758383  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:22.758410  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.800846  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:22.800882  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.858663  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:22.858702  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.896915  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:22.896959  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.938476  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:22.938503  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.984601  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:22.984628  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:23.000223  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:23.000259  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:23.130709  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:23.130746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:23.189821  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:23.189859  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:23.244463  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:23.244535  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:23.299279  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:23.299318  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:23.714691  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:23.714730  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:23.777703  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:23.777749  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:26.364133  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:21:26.364166  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.364171  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.364175  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.364179  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.364182  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.364185  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.364191  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.364195  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.364201  185546 system_pods.go:74] duration metric: took 3.98341316s to wait for pod list to return data ...
	I1028 12:21:26.364209  185546 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:26.366899  185546 default_sa.go:45] found service account: "default"
	I1028 12:21:26.366925  185546 default_sa.go:55] duration metric: took 2.710943ms for default service account to be created ...
	I1028 12:21:26.366934  185546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:26.371193  185546 system_pods.go:86] 8 kube-system pods found
	I1028 12:21:26.371219  185546 system_pods.go:89] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.371224  185546 system_pods.go:89] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.371228  185546 system_pods.go:89] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.371233  185546 system_pods.go:89] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.371237  185546 system_pods.go:89] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.371240  185546 system_pods.go:89] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.371246  185546 system_pods.go:89] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.371250  185546 system_pods.go:89] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.371257  185546 system_pods.go:126] duration metric: took 4.318058ms to wait for k8s-apps to be running ...
	I1028 12:21:26.371265  185546 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:26.371317  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:26.389093  185546 system_svc.go:56] duration metric: took 17.81758ms WaitForService to wait for kubelet
	I1028 12:21:26.389131  185546 kubeadm.go:582] duration metric: took 4m22.631766189s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:26.389158  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:26.392700  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:26.392728  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:26.392741  185546 node_conditions.go:105] duration metric: took 3.576663ms to run NodePressure ...
	I1028 12:21:26.392757  185546 start.go:241] waiting for startup goroutines ...
	I1028 12:21:26.392766  185546 start.go:246] waiting for cluster config update ...
	I1028 12:21:26.392781  185546 start.go:255] writing updated cluster config ...
	I1028 12:21:26.393086  185546 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:26.444274  185546 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:26.446322  185546 out.go:177] * Done! kubectl is now configured to use "no-preload-871884" cluster and "default" namespace by default
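The "no-preload-871884" start above finishes cleanly: every kube-system pod is reported Running except metrics-server-6867b74b74-xr9lt, which is still Pending. As a hedged cross-check outside the test harness, the state the log claims could be confirmed against the kubectl context minikube says it configured (context name and kubectl 1.31.2 taken from the lines above):

    # sketch only; assumes the context name reported by the log
    kubectl --context no-preload-871884 -n kube-system get pods
    kubectl --context no-preload-871884 get nodes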
	I1028 12:21:24.856866  186547 addons.go:510] duration metric: took 1.702877543s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:21:25.462800  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:27.954511  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:30.454530  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.455161  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.955218  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.955242  186547 pod_ready.go:82] duration metric: took 9.507473956s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.955253  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.960990  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.961018  186547 pod_ready.go:82] duration metric: took 5.757431ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.961032  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966957  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.966981  186547 pod_ready.go:82] duration metric: took 5.940549ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966991  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972168  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.972194  186547 pod_ready.go:82] duration metric: took 5.195057ms for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972205  186547 pod_ready.go:39] duration metric: took 9.529713389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:32.972224  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:32.972294  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:32.988675  186547 api_server.go:72] duration metric: took 9.83476496s to wait for apiserver process to appear ...
	I1028 12:21:32.988711  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:32.988736  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:21:32.993068  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:21:32.994352  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:32.994377  186547 api_server.go:131] duration metric: took 5.656136ms to wait for apiserver health ...
	I1028 12:21:32.994387  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:32.999982  186547 system_pods.go:59] 9 kube-system pods found
	I1028 12:21:33.000010  186547 system_pods.go:61] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.000017  186547 system_pods.go:61] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.000024  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.000029  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.000033  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.000037  186547 system_pods.go:61] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.000040  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.000046  186547 system_pods.go:61] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.000051  186547 system_pods.go:61] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.000064  186547 system_pods.go:74] duration metric: took 5.66991ms to wait for pod list to return data ...
	I1028 12:21:33.000075  186547 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:33.003124  186547 default_sa.go:45] found service account: "default"
	I1028 12:21:33.003149  186547 default_sa.go:55] duration metric: took 3.067652ms for default service account to be created ...
	I1028 12:21:33.003159  186547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:33.155864  186547 system_pods.go:86] 9 kube-system pods found
	I1028 12:21:33.155902  186547 system_pods.go:89] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.155914  186547 system_pods.go:89] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.155921  186547 system_pods.go:89] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.155931  186547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.155938  186547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.155943  186547 system_pods.go:89] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.155948  186547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.155956  186547 system_pods.go:89] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.155965  186547 system_pods.go:89] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.155977  186547 system_pods.go:126] duration metric: took 152.809784ms to wait for k8s-apps to be running ...
	I1028 12:21:33.155991  186547 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:33.156049  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:33.171592  186547 system_svc.go:56] duration metric: took 15.589436ms WaitForService to wait for kubelet
	I1028 12:21:33.171647  186547 kubeadm.go:582] duration metric: took 10.017726239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:33.171672  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:33.352932  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:33.352984  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:33.352995  186547 node_conditions.go:105] duration metric: took 181.317488ms to run NodePressure ...
	I1028 12:21:33.353006  186547 start.go:241] waiting for startup goroutines ...
	I1028 12:21:33.353014  186547 start.go:246] waiting for cluster config update ...
	I1028 12:21:33.353024  186547 start.go:255] writing updated cluster config ...
	I1028 12:21:33.353314  186547 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:33.405276  186547 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:33.407589  186547 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-349222" cluster and "default" namespace by default
	I1028 12:22:04.038479  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:22:04.038595  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:22:04.040170  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.040244  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.040356  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.040466  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.040579  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:04.040700  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:04.042557  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:04.042662  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:04.042757  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:04.042877  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:04.042984  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:04.043096  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:04.043158  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:04.043247  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:04.043341  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:04.043442  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:04.043558  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:04.043622  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:04.043675  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:04.043718  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:04.043768  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:04.043825  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:04.043871  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:04.044021  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:04.044164  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:04.044224  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:04.044332  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:04.046085  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:04.046237  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:04.046370  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:04.046463  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:04.046544  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:04.046679  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:04.046728  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:04.046786  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.046976  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047099  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047318  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047393  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047554  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047611  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047799  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047892  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.048151  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.048167  186170 kubeadm.go:310] 
	I1028 12:22:04.048208  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:22:04.048252  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:22:04.048262  186170 kubeadm.go:310] 
	I1028 12:22:04.048317  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:22:04.048363  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:22:04.048453  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:22:04.048464  186170 kubeadm.go:310] 
	I1028 12:22:04.048557  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:22:04.048604  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:22:04.048658  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:22:04.048672  186170 kubeadm.go:310] 
	I1028 12:22:04.048789  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:22:04.048872  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:22:04.048879  186170 kubeadm.go:310] 
	I1028 12:22:04.049027  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:22:04.049143  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:22:04.049246  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:22:04.049347  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:22:04.049428  186170 kubeadm.go:310] 
	W1028 12:22:04.049541  186170 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
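The kubelet-check failures above come from the v1.20.0 bring-up (process 186170): kubeadm polls http://localhost:10248/healthz and the kubelet never answers. The triage path kubeadm itself prints is the sensible one; run over `minikube ssh` on the failing node, a minimal sequence would look roughly like the sketch below (no output from this job, commands taken from the message above):

    # inspect the kubelet unit and its recent journal
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    # list any control-plane containers CRI-O managed to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause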
	
	I1028 12:22:04.049593  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:22:04.555608  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:22:04.571673  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:22:04.583645  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:22:04.583667  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:22:04.583708  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:22:04.594436  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:22:04.594497  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:22:04.605784  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:22:04.616699  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:22:04.616781  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:22:04.628581  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.639511  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:22:04.639608  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.650503  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:22:04.662383  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:22:04.662445  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
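The grep/rm cycle above is minikube pruning kubeconfig files under /etc/kubernetes that do not reference https://control-plane.minikube.internal:8443 before it retries kubeadm init (the check is reported from kubeadm.go in the log lines). Conceptually it amounts to a loop like this illustration, which mirrors the per-file grep followed by rm -f seen above:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done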
	I1028 12:22:04.673286  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:22:04.755504  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.755597  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.903636  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.903808  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.903902  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:05.095520  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:05.097710  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:05.097850  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:05.097937  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:05.098061  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:05.098152  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:05.098252  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:05.098346  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:05.098440  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:05.098905  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:05.099253  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:05.099726  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:05.099786  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:05.099872  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:05.357781  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:05.538771  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:05.744145  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:06.074866  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:06.090636  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:06.091772  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:06.091863  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:06.255534  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:06.257598  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:06.257740  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:06.264309  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:06.266553  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:06.266699  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:06.268340  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:46.271413  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:46.271550  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:46.271812  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:51.271863  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:51.272118  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:01.272732  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:01.272940  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:21.273621  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:21.273888  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.272718  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:24:01.273041  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.273073  186170 kubeadm.go:310] 
	I1028 12:24:01.273126  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:24:01.273220  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:24:01.273249  186170 kubeadm.go:310] 
	I1028 12:24:01.273303  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:24:01.273375  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:24:01.273508  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:24:01.273520  186170 kubeadm.go:310] 
	I1028 12:24:01.273665  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:24:01.273717  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:24:01.273760  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:24:01.273770  186170 kubeadm.go:310] 
	I1028 12:24:01.273900  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:24:01.273966  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:24:01.273972  186170 kubeadm.go:310] 
	I1028 12:24:01.274090  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:24:01.274165  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:24:01.274233  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:24:01.274294  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:24:01.274302  186170 kubeadm.go:310] 
	I1028 12:24:01.275128  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:24:01.275221  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:24:01.275324  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:24:01.275400  186170 kubeadm.go:394] duration metric: took 7m59.062813621s to StartCluster
	I1028 12:24:01.275480  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:24:01.275551  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:24:01.326735  186170 cri.go:89] found id: ""
	I1028 12:24:01.326760  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.326767  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:24:01.326774  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:24:01.326822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:24:01.368065  186170 cri.go:89] found id: ""
	I1028 12:24:01.368094  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.368103  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:24:01.368109  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:24:01.368162  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:24:01.410391  186170 cri.go:89] found id: ""
	I1028 12:24:01.410425  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.410437  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:24:01.410446  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:24:01.410515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:24:01.453290  186170 cri.go:89] found id: ""
	I1028 12:24:01.453332  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.453343  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:24:01.453361  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:24:01.453422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:24:01.490513  186170 cri.go:89] found id: ""
	I1028 12:24:01.490540  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.490547  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:24:01.490553  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:24:01.490600  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:24:01.528320  186170 cri.go:89] found id: ""
	I1028 12:24:01.528350  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.528361  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:24:01.528369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:24:01.528430  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:24:01.566998  186170 cri.go:89] found id: ""
	I1028 12:24:01.567030  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.567041  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:24:01.567050  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:24:01.567113  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:24:01.600946  186170 cri.go:89] found id: ""
	I1028 12:24:01.600973  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.600983  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:24:01.600997  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:24:01.601018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:24:01.615132  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:24:01.615161  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:24:01.737336  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:24:01.737371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:24:01.737387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:24:01.862216  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:24:01.862257  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:24:01.906635  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:24:01.906666  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:24:01.959555  186170 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:24:01.959629  186170 out.go:270] * 
	W1028 12:24:01.959691  186170 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.959706  186170 out.go:270] * 
	W1028 12:24:01.960513  186170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:24:01.963818  186170 out.go:201] 
	W1028 12:24:01.965768  186170 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.965852  186170 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:24:01.965874  186170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:24:01.967350  186170 out.go:201] 
	
	
	==> CRI-O <==
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.187247599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118787187220496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76f3dbed-c60b-4d9b-ad8c-875578bf5ead name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.187996642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4457e8c-659c-474c-9204-d56167d23d73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.188084307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4457e8c-659c-474c-9204-d56167d23d73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.188143516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e4457e8c-659c-474c-9204-d56167d23d73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.224817692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab817e35-8262-43db-b78f-89366e1c6ebb name=/runtime.v1.RuntimeService/Version
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.224971297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab817e35-8262-43db-b78f-89366e1c6ebb name=/runtime.v1.RuntimeService/Version
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.226598270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6536d96-5d51-4840-9ec1-caa52f08ce1d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.227208070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118787227179040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6536d96-5d51-4840-9ec1-caa52f08ce1d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.227901792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae95b60f-fe02-437f-add8-6d84d9ca44e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.227980189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae95b60f-fe02-437f-add8-6d84d9ca44e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.228017511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ae95b60f-fe02-437f-add8-6d84d9ca44e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.263711952Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e83fe3f-daae-41a6-8d78-a1c8a8caae8d name=/runtime.v1.RuntimeService/Version
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.263787014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e83fe3f-daae-41a6-8d78-a1c8a8caae8d name=/runtime.v1.RuntimeService/Version
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.265214920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b3e6079-b41e-448f-bd8e-ea1a24b3164c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.265644629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118787265621267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b3e6079-b41e-448f-bd8e-ea1a24b3164c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.266263248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8eaac0b-a839-4ace-a63b-77e65b6ed40d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.266308111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8eaac0b-a839-4ace-a63b-77e65b6ed40d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.266343166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e8eaac0b-a839-4ace-a63b-77e65b6ed40d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.301103098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e41e4e1-9f3b-4866-a658-48e38810c5c0 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.301187344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e41e4e1-9f3b-4866-a658-48e38810c5c0 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.302333021Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=779b4424-6cb8-475f-8c7b-2e608bb52482 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.302737681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118787302715682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=779b4424-6cb8-475f-8c7b-2e608bb52482 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.303618042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfa9308a-d857-4937-9c37-2c9fd32e16b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.303671694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfa9308a-d857-4937-9c37-2c9fd32e16b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:33:07 old-k8s-version-089993 crio[635]: time="2024-10-28 12:33:07.303715379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dfa9308a-d857-4937-9c37-2c9fd32e16b4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct28 12:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056040] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049869] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.987135] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.705731] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.652068] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.124100] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.059356] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067583] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.203906] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.129426] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.273379] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[Oct28 12:16] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.076324] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.030052] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[ +12.368021] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 12:20] systemd-fstab-generator[5004]: Ignoring "noauto" option for root device
	[Oct28 12:22] systemd-fstab-generator[5284]: Ignoring "noauto" option for root device
	[  +0.072681] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:33:07 up 17 min,  0 users,  load average: 0.17, 0.06, 0.05
	Linux old-k8s-version-089993 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager.SimplePageFunc.func1(0x4f7fe00, 0xc000120010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager/pager.go:40 +0x64
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager.(*ListPager).List(0xc0009abe60, 0x4f7fe00, 0xc000120010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager/pager.go:91 +0x179
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc000d88660, 0xc00094a460, 0xc0008a18c0, 0xc000b758c0, 0xc000b4b508, 0xc000b758d0, 0xc0007ea6c0)
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:302 +0x1a5
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]: goroutine 163 [runnable]:
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]: net/http.(*Transport).dialConn(0xc0008a8000, 0x4f7fe00, 0xc000120018, 0x0, 0xc0007ea780, 0x5, 0xc000d8a300, 0x24, 0x0, 0x0, ...)
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]:         /usr/local/go/src/net/http/transport.go:1535 +0xbe
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]: net/http.(*Transport).dialConnFor(0xc0008a8000, 0xc000565d90)
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]: created by net/http.(*Transport).queueForDial
	Oct 28 12:33:02 old-k8s-version-089993 kubelet[6466]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 28 12:33:02 old-k8s-version-089993 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 28 12:33:02 old-k8s-version-089993 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 28 12:33:03 old-k8s-version-089993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 28 12:33:03 old-k8s-version-089993 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 28 12:33:03 old-k8s-version-089993 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 28 12:33:03 old-k8s-version-089993 kubelet[6475]: I1028 12:33:03.233740    6475 server.go:416] Version: v1.20.0
	Oct 28 12:33:03 old-k8s-version-089993 kubelet[6475]: I1028 12:33:03.234141    6475 server.go:837] Client rotation is on, will bootstrap in background
	Oct 28 12:33:03 old-k8s-version-089993 kubelet[6475]: I1028 12:33:03.235956    6475 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 28 12:33:03 old-k8s-version-089993 kubelet[6475]: W1028 12:33:03.237125    6475 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 28 12:33:03 old-k8s-version-089993 kubelet[6475]: I1028 12:33:03.237191    6475 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 2 (239.29286ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-089993" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.42s)

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (483.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709250 -n embed-certs-709250
E1028 12:37:53.022945  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-28 12:37:53.597128712 +0000 UTC m=+6196.265353092
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-709250 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-709250 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.835µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-709250 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709250 -n embed-certs-709250
E1028 12:37:53.665720  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-709250 logs -n 25
E1028 12:37:54.947634  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-709250 logs -n 25: (1.406563721s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo journalctl                       | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo cat                              | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo cat                              | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo cat                              | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo docker                           | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo cat                              | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo cat                              | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo                                  | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo cat                              | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo cat                              | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo containerd                       | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo systemctl                        | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo find                             | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-903216 sudo crio                             | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-903216                                       | auto-903216   | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC | 28 Oct 24 12:37 UTC |
	| start   | -p calico-903216 --memory=3072                       | calico-903216 | jenkins | v1.34.0 | 28 Oct 24 12:37 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:37:53
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:37:53.637059  196326 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:37:53.637304  196326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:37:53.637343  196326 out.go:358] Setting ErrFile to fd 2...
	I1028 12:37:53.637359  196326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:37:53.637705  196326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:37:53.638570  196326 out.go:352] Setting JSON to false
	I1028 12:37:53.640101  196326 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8417,"bootTime":1730110657,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:37:53.640232  196326 start.go:139] virtualization: kvm guest
	I1028 12:37:53.642400  196326 out.go:177] * [calico-903216] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:37:53.644490  196326 notify.go:220] Checking for updates...
	I1028 12:37:53.644499  196326 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:37:53.646242  196326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:37:53.648072  196326 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:37:53.650281  196326 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:37:53.651692  196326 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:37:53.653158  196326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:37:53.655120  196326 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:37:53.655222  196326 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:37:53.655319  196326 config.go:182] Loaded profile config "kindnet-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:37:53.655403  196326 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:37:53.702136  196326 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 12:37:53.703576  196326 start.go:297] selected driver: kvm2
	I1028 12:37:53.703591  196326 start.go:901] validating driver "kvm2" against <nil>
	I1028 12:37:53.703605  196326 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:37:53.704793  196326 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:37:53.704877  196326 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:37:53.722033  196326 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:37:53.722098  196326 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 12:37:53.722393  196326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:37:53.722436  196326 cni.go:84] Creating CNI manager for "calico"
	I1028 12:37:53.722446  196326 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1028 12:37:53.722511  196326 start.go:340] cluster config:
	{Name:calico-903216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-903216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:37:53.722644  196326 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:37:53.724544  196326 out.go:177] * Starting "calico-903216" primary control-plane node in "calico-903216" cluster
	I1028 12:37:53.726277  196326 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:37:53.726342  196326 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:37:53.726349  196326 cache.go:56] Caching tarball of preloaded images
	I1028 12:37:53.726444  196326 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:37:53.726455  196326 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:37:53.726573  196326 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/calico-903216/config.json ...
	I1028 12:37:53.726594  196326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/calico-903216/config.json: {Name:mk84adc65352c8f5899319260285cfbd3d16bb91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:37:53.726772  196326 start.go:360] acquireMachinesLock for calico-903216: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:37:53.726807  196326 start.go:364] duration metric: took 20.716µs to acquireMachinesLock for "calico-903216"
	I1028 12:37:53.726822  196326 start.go:93] Provisioning new machine with config: &{Name:calico-903216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:calico-903216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:37:53.726877  196326 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.343898027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119074343877555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7128719e-eca8-4b79-89ef-c0375a916de1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.344798691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd05b73a-5846-4b23-918e-16ef2cedbf2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.344850756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd05b73a-5846-4b23-918e-16ef2cedbf2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.345040678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a152ca26f66cbcaf82b768858ea162d9ae60de9ee938ee5bb3ee0e3088d9835,PodSandboxId:3bb168d9739ed55468053aa4a0428fbd52382211ae5a568cb63d30a3c2910169,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730118042039805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gck6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f06472ac-a4c8-4982-822b-29fccd838314,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cad90f41b3ff13cea8abeddf1c30cd0c70afbc78b5bdf097eac4e4a443f478,PodSandboxId:b763e86b15fbb6a25dcd7f5849a0889da8e2943a502f04d7a0dcea3b9708b926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042130329753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p59fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ad8040-64c4-429c-905e-29f8b65e4477,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a806d8aeab6c357d50125044b802f410bfceea0005ddb47889d8a1faf2d07bef,PodSandboxId:d8346dff9c0fdc11ba74a942e8f6ffdd2f9cd7327df000f7d1ca4cd456c1ea3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042077836202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c1f7ad-7f31-4280-99e
3-70594c81237f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eb80a56c8ed084ecacb6b9c43e29e8b07d7ba5ab87e109ff549fb54b3785f4,PodSandboxId:081abd61e8838984219cb13d3f5e4f495e42492b2041b74cd8ecdd603795eb81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17301180418
83763081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b66608-d85e-4dfd-96ab-a1295165e2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be038350ba0561c80512791c25946177e679bc87a18b041091ff1fc6105d1539,PodSandboxId:4ca30b73fad62d4ac47a668f7c4659f9e93021d70c2be2642eaa8ea8215e5358,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118029503300428,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72aabf3490eca4c8563018a0851e820,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ec6c57ee1ebbb4dd22d98288839f4b5fe3ad235d762c554effa1cbbcbd9047,PodSandboxId:2b5ab72e160723f7694f0c78de4cf6cb25155fe7ffad2cc3c78264ea034fa0b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118029489724049,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d6570bdc3ed484abffaeb0ecd8cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c09319a03cd6fc4e7b92df78620192d54885cf982801d6f4ae3638fa0bb0a4d,PodSandboxId:9d26c057428780f96661a5f64af6bdc8b7deab968ab153c8ced460411d33efa9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118029426809985,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e6fb27555e9f5a2c2f3442702674829f0e267f75fbec5b8bcd434c802d6d82,PodSandboxId:822adcfd48466fe4de6163c7a2bb5d869f7415325661236f5111c7d16495758b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118029396882352,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a64489a3b53ca866d51ea7866e987303,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a285c6010e35886eba140354599221c6822f9e1d3c0370a4001b24894ae0defe,PodSandboxId:c1a27a87cb0a26c105d25a553403aac88105befc98f8ded2a26116cf5aa54c15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117742061056735,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd05b73a-5846-4b23-918e-16ef2cedbf2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.385029398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8dd718b2-ee3c-4d6f-b23b-ff891b733efa name=/runtime.v1.RuntimeService/Version
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.385162295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8dd718b2-ee3c-4d6f-b23b-ff891b733efa name=/runtime.v1.RuntimeService/Version
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.386683093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a59b3959-ed7a-4195-8f77-947bb26e1a63 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.387170272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119074387145977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a59b3959-ed7a-4195-8f77-947bb26e1a63 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.387854980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acf66943-5a00-4716-8663-787cd68e4ecb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.387907653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=acf66943-5a00-4716-8663-787cd68e4ecb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.388147238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a152ca26f66cbcaf82b768858ea162d9ae60de9ee938ee5bb3ee0e3088d9835,PodSandboxId:3bb168d9739ed55468053aa4a0428fbd52382211ae5a568cb63d30a3c2910169,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730118042039805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gck6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f06472ac-a4c8-4982-822b-29fccd838314,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cad90f41b3ff13cea8abeddf1c30cd0c70afbc78b5bdf097eac4e4a443f478,PodSandboxId:b763e86b15fbb6a25dcd7f5849a0889da8e2943a502f04d7a0dcea3b9708b926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042130329753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p59fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ad8040-64c4-429c-905e-29f8b65e4477,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a806d8aeab6c357d50125044b802f410bfceea0005ddb47889d8a1faf2d07bef,PodSandboxId:d8346dff9c0fdc11ba74a942e8f6ffdd2f9cd7327df000f7d1ca4cd456c1ea3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042077836202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c1f7ad-7f31-4280-99e
3-70594c81237f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eb80a56c8ed084ecacb6b9c43e29e8b07d7ba5ab87e109ff549fb54b3785f4,PodSandboxId:081abd61e8838984219cb13d3f5e4f495e42492b2041b74cd8ecdd603795eb81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17301180418
83763081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b66608-d85e-4dfd-96ab-a1295165e2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be038350ba0561c80512791c25946177e679bc87a18b041091ff1fc6105d1539,PodSandboxId:4ca30b73fad62d4ac47a668f7c4659f9e93021d70c2be2642eaa8ea8215e5358,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118029503300428,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72aabf3490eca4c8563018a0851e820,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ec6c57ee1ebbb4dd22d98288839f4b5fe3ad235d762c554effa1cbbcbd9047,PodSandboxId:2b5ab72e160723f7694f0c78de4cf6cb25155fe7ffad2cc3c78264ea034fa0b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118029489724049,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d6570bdc3ed484abffaeb0ecd8cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c09319a03cd6fc4e7b92df78620192d54885cf982801d6f4ae3638fa0bb0a4d,PodSandboxId:9d26c057428780f96661a5f64af6bdc8b7deab968ab153c8ced460411d33efa9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118029426809985,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e6fb27555e9f5a2c2f3442702674829f0e267f75fbec5b8bcd434c802d6d82,PodSandboxId:822adcfd48466fe4de6163c7a2bb5d869f7415325661236f5111c7d16495758b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118029396882352,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a64489a3b53ca866d51ea7866e987303,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a285c6010e35886eba140354599221c6822f9e1d3c0370a4001b24894ae0defe,PodSandboxId:c1a27a87cb0a26c105d25a553403aac88105befc98f8ded2a26116cf5aa54c15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117742061056735,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=acf66943-5a00-4716-8663-787cd68e4ecb name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.428383440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b575e308-fafe-4229-a006-a995b96789fb name=/runtime.v1.RuntimeService/Version
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.428459048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b575e308-fafe-4229-a006-a995b96789fb name=/runtime.v1.RuntimeService/Version
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.429892389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee479b5c-953c-42a4-b6d2-ad0c7372c64c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.430466420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119074430438388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee479b5c-953c-42a4-b6d2-ad0c7372c64c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.431195106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e99ec80-d00a-4d15-bf11-046f99888906 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.431264375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e99ec80-d00a-4d15-bf11-046f99888906 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.431818513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a152ca26f66cbcaf82b768858ea162d9ae60de9ee938ee5bb3ee0e3088d9835,PodSandboxId:3bb168d9739ed55468053aa4a0428fbd52382211ae5a568cb63d30a3c2910169,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730118042039805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gck6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f06472ac-a4c8-4982-822b-29fccd838314,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cad90f41b3ff13cea8abeddf1c30cd0c70afbc78b5bdf097eac4e4a443f478,PodSandboxId:b763e86b15fbb6a25dcd7f5849a0889da8e2943a502f04d7a0dcea3b9708b926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042130329753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p59fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ad8040-64c4-429c-905e-29f8b65e4477,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a806d8aeab6c357d50125044b802f410bfceea0005ddb47889d8a1faf2d07bef,PodSandboxId:d8346dff9c0fdc11ba74a942e8f6ffdd2f9cd7327df000f7d1ca4cd456c1ea3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042077836202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c1f7ad-7f31-4280-99e
3-70594c81237f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eb80a56c8ed084ecacb6b9c43e29e8b07d7ba5ab87e109ff549fb54b3785f4,PodSandboxId:081abd61e8838984219cb13d3f5e4f495e42492b2041b74cd8ecdd603795eb81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17301180418
83763081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b66608-d85e-4dfd-96ab-a1295165e2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be038350ba0561c80512791c25946177e679bc87a18b041091ff1fc6105d1539,PodSandboxId:4ca30b73fad62d4ac47a668f7c4659f9e93021d70c2be2642eaa8ea8215e5358,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118029503300428,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72aabf3490eca4c8563018a0851e820,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ec6c57ee1ebbb4dd22d98288839f4b5fe3ad235d762c554effa1cbbcbd9047,PodSandboxId:2b5ab72e160723f7694f0c78de4cf6cb25155fe7ffad2cc3c78264ea034fa0b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118029489724049,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d6570bdc3ed484abffaeb0ecd8cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c09319a03cd6fc4e7b92df78620192d54885cf982801d6f4ae3638fa0bb0a4d,PodSandboxId:9d26c057428780f96661a5f64af6bdc8b7deab968ab153c8ced460411d33efa9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118029426809985,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e6fb27555e9f5a2c2f3442702674829f0e267f75fbec5b8bcd434c802d6d82,PodSandboxId:822adcfd48466fe4de6163c7a2bb5d869f7415325661236f5111c7d16495758b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118029396882352,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a64489a3b53ca866d51ea7866e987303,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a285c6010e35886eba140354599221c6822f9e1d3c0370a4001b24894ae0defe,PodSandboxId:c1a27a87cb0a26c105d25a553403aac88105befc98f8ded2a26116cf5aa54c15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117742061056735,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e99ec80-d00a-4d15-bf11-046f99888906 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.474497429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3866d1f9-1acd-44ce-a76e-9589c6f38ea2 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.474593627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3866d1f9-1acd-44ce-a76e-9589c6f38ea2 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.475550837Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63970850-5c13-405e-b3f4-c7411dc2a8eb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.475965673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119074475943803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63970850-5c13-405e-b3f4-c7411dc2a8eb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.476447428Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dcf388c-ff04-4b39-9534-4eadeafe87f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.476516601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dcf388c-ff04-4b39-9534-4eadeafe87f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:37:54 embed-certs-709250 crio[706]: time="2024-10-28 12:37:54.476718192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a152ca26f66cbcaf82b768858ea162d9ae60de9ee938ee5bb3ee0e3088d9835,PodSandboxId:3bb168d9739ed55468053aa4a0428fbd52382211ae5a568cb63d30a3c2910169,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730118042039805980,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gck6r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f06472ac-a4c8-4982-822b-29fccd838314,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66cad90f41b3ff13cea8abeddf1c30cd0c70afbc78b5bdf097eac4e4a443f478,PodSandboxId:b763e86b15fbb6a25dcd7f5849a0889da8e2943a502f04d7a0dcea3b9708b926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042130329753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-p59fl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59ad8040-64c4-429c-905e-29f8b65e4477,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a806d8aeab6c357d50125044b802f410bfceea0005ddb47889d8a1faf2d07bef,PodSandboxId:d8346dff9c0fdc11ba74a942e8f6ffdd2f9cd7327df000f7d1ca4cd456c1ea3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118042077836202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sx86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c1f7ad-7f31-4280-99e
3-70594c81237f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14eb80a56c8ed084ecacb6b9c43e29e8b07d7ba5ab87e109ff549fb54b3785f4,PodSandboxId:081abd61e8838984219cb13d3f5e4f495e42492b2041b74cd8ecdd603795eb81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17301180418
83763081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6b66608-d85e-4dfd-96ab-a1295165e2f4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be038350ba0561c80512791c25946177e679bc87a18b041091ff1fc6105d1539,PodSandboxId:4ca30b73fad62d4ac47a668f7c4659f9e93021d70c2be2642eaa8ea8215e5358,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118029503300428,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72aabf3490eca4c8563018a0851e820,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ec6c57ee1ebbb4dd22d98288839f4b5fe3ad235d762c554effa1cbbcbd9047,PodSandboxId:2b5ab72e160723f7694f0c78de4cf6cb25155fe7ffad2cc3c78264ea034fa0b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118029489724049,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d6570bdc3ed484abffaeb0ecd8cf2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c09319a03cd6fc4e7b92df78620192d54885cf982801d6f4ae3638fa0bb0a4d,PodSandboxId:9d26c057428780f96661a5f64af6bdc8b7deab968ab153c8ced460411d33efa9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118029426809985,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e6fb27555e9f5a2c2f3442702674829f0e267f75fbec5b8bcd434c802d6d82,PodSandboxId:822adcfd48466fe4de6163c7a2bb5d869f7415325661236f5111c7d16495758b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118029396882352,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a64489a3b53ca866d51ea7866e987303,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a285c6010e35886eba140354599221c6822f9e1d3c0370a4001b24894ae0defe,PodSandboxId:c1a27a87cb0a26c105d25a553403aac88105befc98f8ded2a26116cf5aa54c15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117742061056735,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-709250,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1091710638993300084b6e9c9fba5922,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dcf388c-ff04-4b39-9534-4eadeafe87f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66cad90f41b3f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   b763e86b15fbb       coredns-7c65d6cfc9-p59fl
	a806d8aeab6c3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   d8346dff9c0fd       coredns-7c65d6cfc9-sx86n
	1a152ca26f66c       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   17 minutes ago      Running             kube-proxy                0                   3bb168d9739ed       kube-proxy-gck6r
	14eb80a56c8ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   081abd61e8838       storage-provisioner
	be038350ba056       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   4ca30b73fad62       etcd-embed-certs-709250
	b6ec6c57ee1eb       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   17 minutes ago      Running             kube-controller-manager   2                   2b5ab72e16072       kube-controller-manager-embed-certs-709250
	6c09319a03cd6       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   17 minutes ago      Running             kube-apiserver            2                   9d26c05742878       kube-apiserver-embed-certs-709250
	30e6fb27555e9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   17 minutes ago      Running             kube-scheduler            2                   822adcfd48466       kube-scheduler-embed-certs-709250
	a285c6010e358       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   22 minutes ago      Exited              kube-apiserver            1                   c1a27a87cb0a2       kube-apiserver-embed-certs-709250
	
	
	==> coredns [66cad90f41b3ff13cea8abeddf1c30cd0c70afbc78b5bdf097eac4e4a443f478] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a806d8aeab6c357d50125044b802f410bfceea0005ddb47889d8a1faf2d07bef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-709250
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-709250
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=embed-certs-709250
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_20_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:20:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-709250
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:37:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:36:06 +0000   Mon, 28 Oct 2024 12:20:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:36:06 +0000   Mon, 28 Oct 2024 12:20:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:36:06 +0000   Mon, 28 Oct 2024 12:20:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:36:06 +0000   Mon, 28 Oct 2024 12:20:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    embed-certs-709250
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6e4b62e9df843e4bbd9e383d70b7bdb
	  System UUID:                e6e4b62e-9df8-43e4-bbd9-e383d70b7bdb
	  Boot ID:                    33d35854-6802-40c2-bc8d-c766fd7fca9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-p59fl                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-sx86n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-embed-certs-709250                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-709250             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-709250    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-gck6r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-embed-certs-709250             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-wwlqv               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node embed-certs-709250 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node embed-certs-709250 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node embed-certs-709250 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node embed-certs-709250 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node embed-certs-709250 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node embed-certs-709250 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node embed-certs-709250 event: Registered Node embed-certs-709250 in Controller
	
	
	==> dmesg <==
	[  +0.053106] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042646] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.956548] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.943177] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.652934] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.766588] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.064740] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065313] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.207859] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.126120] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.308135] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +4.434809] systemd-fstab-generator[790]: Ignoring "noauto" option for root device
	[  +0.056785] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.122474] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +4.581784] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.850911] kauditd_printk_skb: 85 callbacks suppressed
	[Oct28 12:20] kauditd_printk_skb: 4 callbacks suppressed
	[  +2.184049] systemd-fstab-generator[2552]: Ignoring "noauto" option for root device
	[  +4.494525] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.069154] systemd-fstab-generator[2872]: Ignoring "noauto" option for root device
	[  +5.534431] systemd-fstab-generator[3002]: Ignoring "noauto" option for root device
	[  +0.098216] kauditd_printk_skb: 14 callbacks suppressed
	[Oct28 12:21] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [be038350ba0561c80512791c25946177e679bc87a18b041091ff1fc6105d1539] <==
	{"level":"info","ts":"2024-10-28T12:36:34.836780Z","caller":"traceutil/trace.go:171","msg":"trace[2068609999] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1261; }","duration":"161.799271ms","start":"2024-10-28T12:36:34.674971Z","end":"2024-10-28T12:36:34.836771Z","steps":["trace[2068609999] 'agreement among raft nodes before linearized reading'  (duration: 161.711035ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:36:34.836721Z","caller":"traceutil/trace.go:171","msg":"trace[1499926977] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1261; }","duration":"225.695928ms","start":"2024-10-28T12:36:34.611015Z","end":"2024-10-28T12:36:34.836711Z","steps":["trace[1499926977] 'agreement among raft nodes before linearized reading'  (duration: 225.523445ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:36:35.439191Z","caller":"traceutil/trace.go:171","msg":"trace[1576835260] transaction","detail":"{read_only:false; response_revision:1262; number_of_response:1; }","duration":"119.458998ms","start":"2024-10-28T12:36:35.319708Z","end":"2024-10-28T12:36:35.439167Z","steps":["trace[1576835260] 'process raft request'  (duration: 119.129869ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:36:54.959656Z","caller":"traceutil/trace.go:171","msg":"trace[219154930] transaction","detail":"{read_only:false; response_revision:1277; number_of_response:1; }","duration":"755.121613ms","start":"2024-10-28T12:36:54.204510Z","end":"2024-10-28T12:36:54.959631Z","steps":["trace[219154930] 'process raft request'  (duration: 754.987108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:36:54.960285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:36:54.204496Z","time spent":"755.385742ms","remote":"127.0.0.1:39664","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1276 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-28T12:36:55.229671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.367911ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:36:55.230454Z","caller":"traceutil/trace.go:171","msg":"trace[1714313908] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1277; }","duration":"151.164008ms","start":"2024-10-28T12:36:55.079266Z","end":"2024-10-28T12:36:55.230430Z","steps":["trace[1714313908] 'range keys from in-memory index tree'  (duration: 150.35396ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:36:55.230337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.907875ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11303352075008751891 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1cdd92d312123912>","response":"size:41"}
	{"level":"info","ts":"2024-10-28T12:36:55.230588Z","caller":"traceutil/trace.go:171","msg":"trace[2104317488] linearizableReadLoop","detail":"{readStateIndex:1491; appliedIndex:1489; }","duration":"619.835985ms","start":"2024-10-28T12:36:54.610743Z","end":"2024-10-28T12:36:55.230579Z","steps":["trace[2104317488] 'read index received'  (duration: 348.759804ms)","trace[2104317488] 'applied index is now lower than readState.Index'  (duration: 271.075317ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:36:55.230696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:36:54.205413Z","time spent":"1.025276944s","remote":"127.0.0.1:39530","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-10-28T12:36:55.230731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"619.99168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:36:55.230782Z","caller":"traceutil/trace.go:171","msg":"trace[1113880152] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1277; }","duration":"620.053981ms","start":"2024-10-28T12:36:54.610719Z","end":"2024-10-28T12:36:55.230773Z","steps":["trace[1113880152] 'agreement among raft nodes before linearized reading'  (duration: 619.966265ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:36:55.230842Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:36:54.610686Z","time spent":"620.146305ms","remote":"127.0.0.1:39686","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-28T12:36:55.489325Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.638008ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11303352075008751894 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.211\" mod_revision:1269 > success:<request_put:<key:\"/registry/masterleases/192.168.39.211\" value_size:67 lease:2079980038153976082 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.211\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T12:36:55.490326Z","caller":"traceutil/trace.go:171","msg":"trace[581960079] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"258.163517ms","start":"2024-10-28T12:36:55.232149Z","end":"2024-10-28T12:36:55.490312Z","steps":["trace[581960079] 'process raft request'  (duration: 122.475736ms)","trace[581960079] 'compare'  (duration: 134.18705ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:36:55.743700Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.957176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:36:55.744236Z","caller":"traceutil/trace.go:171","msg":"trace[252363339] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1278; }","duration":"134.500198ms","start":"2024-10-28T12:36:55.609717Z","end":"2024-10-28T12:36:55.744217Z","steps":["trace[252363339] 'range keys from in-memory index tree'  (duration: 133.906173ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:37:42.821535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.2135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T12:37:42.821993Z","caller":"traceutil/trace.go:171","msg":"trace[95313399] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1316; }","duration":"227.698035ms","start":"2024-10-28T12:37:42.594267Z","end":"2024-10-28T12:37:42.821965Z","steps":["trace[95313399] 'count revisions from in-memory index tree'  (duration: 227.161138ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:37:42.821775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.494924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T12:37:42.822452Z","caller":"traceutil/trace.go:171","msg":"trace[354891854] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:0; response_revision:1316; }","duration":"148.176345ms","start":"2024-10-28T12:37:42.674265Z","end":"2024-10-28T12:37:42.822441Z","steps":["trace[354891854] 'count revisions from in-memory index tree'  (duration: 147.441518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:37:42.821808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.130095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-28T12:37:42.821835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.396801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:37:42.823540Z","caller":"traceutil/trace.go:171","msg":"trace[624537017] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1316; }","duration":"209.095037ms","start":"2024-10-28T12:37:42.614432Z","end":"2024-10-28T12:37:42.823527Z","steps":["trace[624537017] 'count revisions from in-memory index tree'  (duration: 207.343068ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:37:42.823435Z","caller":"traceutil/trace.go:171","msg":"trace[978901687] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1316; }","duration":"215.685498ms","start":"2024-10-28T12:37:42.607670Z","end":"2024-10-28T12:37:42.823356Z","steps":["trace[978901687] 'range keys from in-memory index tree'  (duration: 214.069288ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:37:54 up 22 min,  0 users,  load average: 0.24, 0.20, 0.18
	Linux embed-certs-709250 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6c09319a03cd6fc4e7b92df78620192d54885cf982801d6f4ae3638fa0bb0a4d] <==
	I1028 12:33:33.342333       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:33:33.342322       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:35:32.341177       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:35:32.341421       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 12:35:33.343554       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:35:33.343774       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 12:35:33.343615       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:35:33.344036       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:35:33.345187       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:35:33.345234       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:36:33.346312       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:36:33.346418       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 12:36:33.346541       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:36:33.346658       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:36:33.347631       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:36:33.347784       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a285c6010e35886eba140354599221c6822f9e1d3c0370a4001b24894ae0defe] <==
	W1028 12:20:22.188562       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.207375       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.213172       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.217804       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.229435       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.259347       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.294387       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.309384       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.386238       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.444648       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.459447       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.467170       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.468598       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.499317       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.526504       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.561710       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.647833       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.698537       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.797604       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.959630       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:22.979942       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:23.122195       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:23.167447       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:23.168838       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:20:23.256854       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b6ec6c57ee1ebbb4dd22d98288839f4b5fe3ad235d762c554effa1cbbcbd9047] <==
	E1028 12:32:39.267685       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:32:39.929958       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:33:09.274532       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:33:09.938555       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:33:39.281458       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:33:39.947875       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:34:09.288682       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:34:09.956852       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:34:39.299017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:34:39.965403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:35:09.305839       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:35:09.974032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:35:39.312783       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:35:39.983465       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:36:06.901604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-709250"
	E1028 12:36:09.321567       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:36:09.993053       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:36:39.330809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:36:40.004805       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:36:47.206288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="363.782µs"
	I1028 12:37:02.197872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="165.997µs"
	E1028 12:37:09.338826       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:37:10.017042       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:37:39.346246       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:37:40.027450       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1a152ca26f66cbcaf82b768858ea162d9ae60de9ee938ee5bb3ee0e3088d9835] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:20:42.649400       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:20:42.664873       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.211"]
	E1028 12:20:42.664968       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:20:42.708116       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:20:42.708167       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:20:42.708200       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:20:42.711039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:20:42.711446       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:20:42.711475       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:20:42.712745       1 config.go:199] "Starting service config controller"
	I1028 12:20:42.712787       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:20:42.712814       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:20:42.712818       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:20:42.713359       1 config.go:328] "Starting node config controller"
	I1028 12:20:42.713391       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:20:42.813584       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:20:42.813672       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:20:42.813697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [30e6fb27555e9f5a2c2f3442702674829f0e267f75fbec5b8bcd434c802d6d82] <==
	W1028 12:20:33.256472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 12:20:33.256521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.258720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 12:20:33.258809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.282338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 12:20:33.282586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.323223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 12:20:33.323341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.359951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 12:20:33.360789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.429978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 12:20:33.430191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.459404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1028 12:20:33.459646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.465840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 12:20:33.465875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.490925       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:20:33.491056       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 12:20:33.658673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 12:20:33.658813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.677632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1028 12:20:33.677753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:20:33.685149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 12:20:33.685266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1028 12:20:35.450004       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 12:36:45 embed-certs-709250 kubelet[2879]: E1028 12:36:45.476201    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119005475470216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:36:45 embed-certs-709250 kubelet[2879]: E1028 12:36:45.476598    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119005475470216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:36:47 embed-certs-709250 kubelet[2879]: E1028 12:36:47.179592    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:36:55 embed-certs-709250 kubelet[2879]: E1028 12:36:55.478505    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119015478006946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:36:55 embed-certs-709250 kubelet[2879]: E1028 12:36:55.478528    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119015478006946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:02 embed-certs-709250 kubelet[2879]: E1028 12:37:02.177860    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:37:05 embed-certs-709250 kubelet[2879]: E1028 12:37:05.480338    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119025479619114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:05 embed-certs-709250 kubelet[2879]: E1028 12:37:05.480368    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119025479619114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:15 embed-certs-709250 kubelet[2879]: E1028 12:37:15.492581    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119035491389217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:15 embed-certs-709250 kubelet[2879]: E1028 12:37:15.492783    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119035491389217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:17 embed-certs-709250 kubelet[2879]: E1028 12:37:17.175576    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:37:25 embed-certs-709250 kubelet[2879]: E1028 12:37:25.495937    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119045495289952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:25 embed-certs-709250 kubelet[2879]: E1028 12:37:25.495995    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119045495289952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:28 embed-certs-709250 kubelet[2879]: E1028 12:37:28.175183    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:37:35 embed-certs-709250 kubelet[2879]: E1028 12:37:35.197484    2879 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 12:37:35 embed-certs-709250 kubelet[2879]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 12:37:35 embed-certs-709250 kubelet[2879]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 12:37:35 embed-certs-709250 kubelet[2879]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 12:37:35 embed-certs-709250 kubelet[2879]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 12:37:35 embed-certs-709250 kubelet[2879]: E1028 12:37:35.498421    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119055497789242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:35 embed-certs-709250 kubelet[2879]: E1028 12:37:35.498482    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119055497789242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:41 embed-certs-709250 kubelet[2879]: E1028 12:37:41.177171    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	Oct 28 12:37:45 embed-certs-709250 kubelet[2879]: E1028 12:37:45.501698    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119065500840297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:45 embed-certs-709250 kubelet[2879]: E1028 12:37:45.502329    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119065500840297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:55 embed-certs-709250 kubelet[2879]: E1028 12:37:55.176528    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wwlqv" podUID="40ea7346-36fe-4d24-b4d3-1d12e1211182"
	
	
	==> storage-provisioner [14eb80a56c8ed084ecacb6b9c43e29e8b07d7ba5ab87e109ff549fb54b3785f4] <==
	I1028 12:20:42.407161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:20:42.537353       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:20:42.537471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:20:42.557894       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:20:42.558671       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb2d0ff7-983a-459a-a2dd-54680a334af3", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-709250_f4ce2431-4ded-4f52-8ad7-e27599efb83d became leader
	I1028 12:20:42.561426       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-709250_f4ce2431-4ded-4f52-8ad7-e27599efb83d!
	I1028 12:20:42.664512       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-709250_f4ce2431-4ded-4f52-8ad7-e27599efb83d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-709250 -n embed-certs-709250
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-709250 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-wwlqv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-709250 describe pod metrics-server-6867b74b74-wwlqv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-709250 describe pod metrics-server-6867b74b74-wwlqv: exit status 1 (77.781236ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-wwlqv" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-709250 describe pod metrics-server-6867b74b74-wwlqv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (483.11s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.67s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-871884 -n no-preload-871884
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-28 12:35:59.42129881 +0000 UTC m=+6082.089523194
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-871884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-871884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.428µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-871884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-871884 -n no-preload-871884
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-871884 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-871884 logs -n 25: (1.365303974s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-601400                              | cert-expiration-601400       | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-871884             | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-219559 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | disable-driver-mounts-219559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:10 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709250            | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC | 28 Oct 24 12:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089993        | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-871884                  | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-349222  | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709250                 | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089993             | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-349222       | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:13 UTC | 28 Oct 24 12:21 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:35 UTC | 28 Oct 24 12:35 UTC |
	| start   | -p newest-cni-604556 --memory=2200 --alsologtostderr   | newest-cni-604556            | jenkins | v1.34.0 | 28 Oct 24 12:35 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:35:20
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:35:20.010516  192895 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:35:20.010642  192895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:35:20.010652  192895 out.go:358] Setting ErrFile to fd 2...
	I1028 12:35:20.010658  192895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:35:20.010885  192895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:35:20.011485  192895 out.go:352] Setting JSON to false
	I1028 12:35:20.012512  192895 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8263,"bootTime":1730110657,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:35:20.012635  192895 start.go:139] virtualization: kvm guest
	I1028 12:35:20.015171  192895 out.go:177] * [newest-cni-604556] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:35:20.016719  192895 notify.go:220] Checking for updates...
	I1028 12:35:20.016733  192895 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:35:20.018239  192895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:35:20.019634  192895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:35:20.020981  192895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:35:20.022345  192895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:35:20.023592  192895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:35:20.025344  192895 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:35:20.025453  192895 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:35:20.025588  192895 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:35:20.025687  192895 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:35:20.064156  192895 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 12:35:20.065595  192895 start.go:297] selected driver: kvm2
	I1028 12:35:20.065615  192895 start.go:901] validating driver "kvm2" against <nil>
	I1028 12:35:20.065635  192895 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:35:20.066315  192895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:35:20.066401  192895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:35:20.082503  192895 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:35:20.082552  192895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1028 12:35:20.082626  192895 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1028 12:35:20.082859  192895 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1028 12:35:20.082895  192895 cni.go:84] Creating CNI manager for ""
	I1028 12:35:20.082948  192895 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:35:20.082960  192895 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 12:35:20.083004  192895 start.go:340] cluster config:
	{Name:newest-cni-604556 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-604556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:35:20.083105  192895 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:35:20.085067  192895 out.go:177] * Starting "newest-cni-604556" primary control-plane node in "newest-cni-604556" cluster
	I1028 12:35:20.086443  192895 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:35:20.086503  192895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:35:20.086517  192895 cache.go:56] Caching tarball of preloaded images
	I1028 12:35:20.086611  192895 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:35:20.086625  192895 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:35:20.086747  192895 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/config.json ...
	I1028 12:35:20.086777  192895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/config.json: {Name:mk700e7db66f18b4d3170ae944d59c052e20edf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
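As an aside, the profile.go/lock.go lines above amount to a locked, atomic write of the profile's config.json. Below is a minimal stand-alone sketch of that idea; the struct fields, paths, and in-process mutex are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
	"sync"
)

// ProfileConfig is an illustrative stand-in for minikube's cluster config struct.
type ProfileConfig struct {
	Name              string `json:"Name"`
	KubernetesVersion string `json:"KubernetesVersion"`
	ContainerRuntime  string `json:"ContainerRuntime"`
	Driver            string `json:"Driver"`
}

var configMu sync.Mutex // serializes writers to the same config.json

// saveProfileConfig writes the config atomically: marshal, write to a temp
// file in the same directory, then rename over config.json.
func saveProfileConfig(dir string, cfg ProfileConfig) error {
	configMu.Lock()
	defer configMu.Unlock()

	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	tmp := filepath.Join(dir, ".config.json.tmp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, "config.json"))
}

func main() {
	_ = saveProfileConfig("/tmp/newest-cni-604556", ProfileConfig{
		Name:              "newest-cni-604556",
		KubernetesVersion: "v1.31.2",
		ContainerRuntime:  "crio",
		Driver:            "kvm2",
	})
}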
	I1028 12:35:20.086957  192895 start.go:360] acquireMachinesLock for newest-cni-604556: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:35:20.087024  192895 start.go:364] duration metric: took 50.321µs to acquireMachinesLock for "newest-cni-604556"
	I1028 12:35:20.087051  192895 start.go:93] Provisioning new machine with config: &{Name:newest-cni-604556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-604556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:35:20.087156  192895 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 12:35:20.088813  192895 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1028 12:35:20.088998  192895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:35:20.089046  192895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:35:20.104810  192895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36791
	I1028 12:35:20.105235  192895 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:35:20.105860  192895 main.go:141] libmachine: Using API Version  1
	I1028 12:35:20.105911  192895 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:35:20.106323  192895 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:35:20.106527  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetMachineName
	I1028 12:35:20.106785  192895 main.go:141] libmachine: (newest-cni-604556) Calling .DriverName
	I1028 12:35:20.106984  192895 start.go:159] libmachine.API.Create for "newest-cni-604556" (driver="kvm2")
	I1028 12:35:20.107014  192895 client.go:168] LocalClient.Create starting
	I1028 12:35:20.107070  192895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem
	I1028 12:35:20.107112  192895 main.go:141] libmachine: Decoding PEM data...
	I1028 12:35:20.107137  192895 main.go:141] libmachine: Parsing certificate...
	I1028 12:35:20.107219  192895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem
	I1028 12:35:20.107248  192895 main.go:141] libmachine: Decoding PEM data...
	I1028 12:35:20.107267  192895 main.go:141] libmachine: Parsing certificate...
	I1028 12:35:20.107290  192895 main.go:141] libmachine: Running pre-create checks...
	I1028 12:35:20.107305  192895 main.go:141] libmachine: (newest-cni-604556) Calling .PreCreateCheck
	I1028 12:35:20.107754  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetConfigRaw
	I1028 12:35:20.108159  192895 main.go:141] libmachine: Creating machine...
	I1028 12:35:20.108172  192895 main.go:141] libmachine: (newest-cni-604556) Calling .Create
	I1028 12:35:20.108379  192895 main.go:141] libmachine: (newest-cni-604556) Creating KVM machine...
	I1028 12:35:20.109781  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found existing default KVM network
	I1028 12:35:20.110989  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:20.110827  192918 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f6:28:1d} reservation:<nil>}
	I1028 12:35:20.111896  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:20.111788  192918 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2a:04:62} reservation:<nil>}
	I1028 12:35:20.113070  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:20.112947  192918 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a50d0}
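The three network.go lines above show the subnet picker skipping 192.168.39.0/24 and 192.168.50.0/24 before settling on the free 192.168.61.0/24. A rough sketch of that idea follows; the real picker walks its own candidate list and tracks libvirt reservations, while this version simply scans third octets against the host's interface addresses.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate 192.168.x.0/24 subnets and returns the
// first one whose gateway address is not covered by a local interface.
// This approximates the "taken"/"free" checks seen in the log above.
func firstFreeSubnet(start, end int) (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for third := start; third <= end; third++ {
		gw := net.IPv4(192, 168, byte(third), 1)
		taken := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.Contains(gw) {
				taken = true
				break
			}
		}
		if !taken {
			_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet in 192.168.%d-%d.0/24", start, end)
}

func main() {
	subnet, err := firstFreeSubnet(39, 254)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // e.g. 192.168.61.0/24
}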
	I1028 12:35:20.113115  192895 main.go:141] libmachine: (newest-cni-604556) DBG | created network xml: 
	I1028 12:35:20.113134  192895 main.go:141] libmachine: (newest-cni-604556) DBG | <network>
	I1028 12:35:20.113150  192895 main.go:141] libmachine: (newest-cni-604556) DBG |   <name>mk-newest-cni-604556</name>
	I1028 12:35:20.113163  192895 main.go:141] libmachine: (newest-cni-604556) DBG |   <dns enable='no'/>
	I1028 12:35:20.113171  192895 main.go:141] libmachine: (newest-cni-604556) DBG |   
	I1028 12:35:20.113182  192895 main.go:141] libmachine: (newest-cni-604556) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1028 12:35:20.113193  192895 main.go:141] libmachine: (newest-cni-604556) DBG |     <dhcp>
	I1028 12:35:20.113203  192895 main.go:141] libmachine: (newest-cni-604556) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1028 12:35:20.113213  192895 main.go:141] libmachine: (newest-cni-604556) DBG |     </dhcp>
	I1028 12:35:20.113226  192895 main.go:141] libmachine: (newest-cni-604556) DBG |   </ip>
	I1028 12:35:20.113238  192895 main.go:141] libmachine: (newest-cni-604556) DBG |   
	I1028 12:35:20.113288  192895 main.go:141] libmachine: (newest-cni-604556) DBG | </network>
	I1028 12:35:20.113335  192895 main.go:141] libmachine: (newest-cni-604556) DBG | 
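For reference, a hedged sketch of turning the network XML printed above into a defined, started libvirt network by shelling out to virsh. The kvm2 driver itself talks to libvirt through its API rather than the CLI, so this is only an equivalent-by-hand illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-newest-cni-604556</name>
  <dns enable='no'/>
  <ip address='192.168.61.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.61.2' end='192.168.61.253'/>
    </dhcp>
  </ip>
</network>`

// createNetwork defines, starts, and autostarts a private libvirt network
// from the given XML, mirroring the "create private KVM network" step above.
func createNetwork(name, xml string) error {
	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(xml); err != nil {
		return err
	}
	f.Close()

	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", name},
		{"net-autostart", name},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := createNetwork("mk-newest-cni-604556", networkXML); err != nil {
		panic(err)
	}
}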
	I1028 12:35:20.119301  192895 main.go:141] libmachine: (newest-cni-604556) DBG | trying to create private KVM network mk-newest-cni-604556 192.168.61.0/24...
	I1028 12:35:20.196819  192895 main.go:141] libmachine: (newest-cni-604556) Setting up store path in /home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556 ...
	I1028 12:35:20.196856  192895 main.go:141] libmachine: (newest-cni-604556) Building disk image from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 12:35:20.196867  192895 main.go:141] libmachine: (newest-cni-604556) DBG | private KVM network mk-newest-cni-604556 192.168.61.0/24 created
	I1028 12:35:20.196885  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:20.196737  192918 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:35:20.196908  192895 main.go:141] libmachine: (newest-cni-604556) Downloading /home/jenkins/minikube-integration/19876-132631/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1028 12:35:20.488688  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:20.488549  192918 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/id_rsa...
	I1028 12:35:20.722962  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:20.722811  192918 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/newest-cni-604556.rawdisk...
	I1028 12:35:20.722992  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Writing magic tar header
	I1028 12:35:20.723021  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Writing SSH key tar header
	I1028 12:35:20.723059  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:20.722997  192918 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556 ...
	I1028 12:35:20.723177  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556
	I1028 12:35:20.723225  192895 main.go:141] libmachine: (newest-cni-604556) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556 (perms=drwx------)
	I1028 12:35:20.723244  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube/machines
	I1028 12:35:20.723264  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:35:20.723278  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19876-132631
	I1028 12:35:20.723320  192895 main.go:141] libmachine: (newest-cni-604556) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube/machines (perms=drwxr-xr-x)
	I1028 12:35:20.723337  192895 main.go:141] libmachine: (newest-cni-604556) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631/.minikube (perms=drwxr-xr-x)
	I1028 12:35:20.723348  192895 main.go:141] libmachine: (newest-cni-604556) Setting executable bit set on /home/jenkins/minikube-integration/19876-132631 (perms=drwxrwxr-x)
	I1028 12:35:20.723409  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1028 12:35:20.723440  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Checking permissions on dir: /home/jenkins
	I1028 12:35:20.723447  192895 main.go:141] libmachine: (newest-cni-604556) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1028 12:35:20.723462  192895 main.go:141] libmachine: (newest-cni-604556) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1028 12:35:20.723469  192895 main.go:141] libmachine: (newest-cni-604556) Creating domain...
	I1028 12:35:20.723475  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Checking permissions on dir: /home
	I1028 12:35:20.723483  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Skipping /home - not owner
	I1028 12:35:20.724707  192895 main.go:141] libmachine: (newest-cni-604556) define libvirt domain using xml: 
	I1028 12:35:20.724739  192895 main.go:141] libmachine: (newest-cni-604556) <domain type='kvm'>
	I1028 12:35:20.724764  192895 main.go:141] libmachine: (newest-cni-604556)   <name>newest-cni-604556</name>
	I1028 12:35:20.724781  192895 main.go:141] libmachine: (newest-cni-604556)   <memory unit='MiB'>2200</memory>
	I1028 12:35:20.724793  192895 main.go:141] libmachine: (newest-cni-604556)   <vcpu>2</vcpu>
	I1028 12:35:20.724806  192895 main.go:141] libmachine: (newest-cni-604556)   <features>
	I1028 12:35:20.724845  192895 main.go:141] libmachine: (newest-cni-604556)     <acpi/>
	I1028 12:35:20.724874  192895 main.go:141] libmachine: (newest-cni-604556)     <apic/>
	I1028 12:35:20.724884  192895 main.go:141] libmachine: (newest-cni-604556)     <pae/>
	I1028 12:35:20.724888  192895 main.go:141] libmachine: (newest-cni-604556)     
	I1028 12:35:20.724894  192895 main.go:141] libmachine: (newest-cni-604556)   </features>
	I1028 12:35:20.724900  192895 main.go:141] libmachine: (newest-cni-604556)   <cpu mode='host-passthrough'>
	I1028 12:35:20.724905  192895 main.go:141] libmachine: (newest-cni-604556)   
	I1028 12:35:20.724911  192895 main.go:141] libmachine: (newest-cni-604556)   </cpu>
	I1028 12:35:20.724916  192895 main.go:141] libmachine: (newest-cni-604556)   <os>
	I1028 12:35:20.724921  192895 main.go:141] libmachine: (newest-cni-604556)     <type>hvm</type>
	I1028 12:35:20.724934  192895 main.go:141] libmachine: (newest-cni-604556)     <boot dev='cdrom'/>
	I1028 12:35:20.724944  192895 main.go:141] libmachine: (newest-cni-604556)     <boot dev='hd'/>
	I1028 12:35:20.724950  192895 main.go:141] libmachine: (newest-cni-604556)     <bootmenu enable='no'/>
	I1028 12:35:20.724958  192895 main.go:141] libmachine: (newest-cni-604556)   </os>
	I1028 12:35:20.724966  192895 main.go:141] libmachine: (newest-cni-604556)   <devices>
	I1028 12:35:20.724973  192895 main.go:141] libmachine: (newest-cni-604556)     <disk type='file' device='cdrom'>
	I1028 12:35:20.724990  192895 main.go:141] libmachine: (newest-cni-604556)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/boot2docker.iso'/>
	I1028 12:35:20.724999  192895 main.go:141] libmachine: (newest-cni-604556)       <target dev='hdc' bus='scsi'/>
	I1028 12:35:20.725004  192895 main.go:141] libmachine: (newest-cni-604556)       <readonly/>
	I1028 12:35:20.725011  192895 main.go:141] libmachine: (newest-cni-604556)     </disk>
	I1028 12:35:20.725017  192895 main.go:141] libmachine: (newest-cni-604556)     <disk type='file' device='disk'>
	I1028 12:35:20.725027  192895 main.go:141] libmachine: (newest-cni-604556)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1028 12:35:20.725038  192895 main.go:141] libmachine: (newest-cni-604556)       <source file='/home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/newest-cni-604556.rawdisk'/>
	I1028 12:35:20.725047  192895 main.go:141] libmachine: (newest-cni-604556)       <target dev='hda' bus='virtio'/>
	I1028 12:35:20.725055  192895 main.go:141] libmachine: (newest-cni-604556)     </disk>
	I1028 12:35:20.725066  192895 main.go:141] libmachine: (newest-cni-604556)     <interface type='network'>
	I1028 12:35:20.725076  192895 main.go:141] libmachine: (newest-cni-604556)       <source network='mk-newest-cni-604556'/>
	I1028 12:35:20.725090  192895 main.go:141] libmachine: (newest-cni-604556)       <model type='virtio'/>
	I1028 12:35:20.725098  192895 main.go:141] libmachine: (newest-cni-604556)     </interface>
	I1028 12:35:20.725103  192895 main.go:141] libmachine: (newest-cni-604556)     <interface type='network'>
	I1028 12:35:20.725111  192895 main.go:141] libmachine: (newest-cni-604556)       <source network='default'/>
	I1028 12:35:20.725115  192895 main.go:141] libmachine: (newest-cni-604556)       <model type='virtio'/>
	I1028 12:35:20.725124  192895 main.go:141] libmachine: (newest-cni-604556)     </interface>
	I1028 12:35:20.725129  192895 main.go:141] libmachine: (newest-cni-604556)     <serial type='pty'>
	I1028 12:35:20.725136  192895 main.go:141] libmachine: (newest-cni-604556)       <target port='0'/>
	I1028 12:35:20.725143  192895 main.go:141] libmachine: (newest-cni-604556)     </serial>
	I1028 12:35:20.725155  192895 main.go:141] libmachine: (newest-cni-604556)     <console type='pty'>
	I1028 12:35:20.725170  192895 main.go:141] libmachine: (newest-cni-604556)       <target type='serial' port='0'/>
	I1028 12:35:20.725181  192895 main.go:141] libmachine: (newest-cni-604556)     </console>
	I1028 12:35:20.725187  192895 main.go:141] libmachine: (newest-cni-604556)     <rng model='virtio'>
	I1028 12:35:20.725193  192895 main.go:141] libmachine: (newest-cni-604556)       <backend model='random'>/dev/random</backend>
	I1028 12:35:20.725198  192895 main.go:141] libmachine: (newest-cni-604556)     </rng>
	I1028 12:35:20.725203  192895 main.go:141] libmachine: (newest-cni-604556)     
	I1028 12:35:20.725212  192895 main.go:141] libmachine: (newest-cni-604556)     
	I1028 12:35:20.725217  192895 main.go:141] libmachine: (newest-cni-604556)   </devices>
	I1028 12:35:20.725222  192895 main.go:141] libmachine: (newest-cni-604556) </domain>
	I1028 12:35:20.725231  192895 main.go:141] libmachine: (newest-cni-604556) 
	I1028 12:35:20.730016  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:2d:ec:79 in network default
	I1028 12:35:20.730623  192895 main.go:141] libmachine: (newest-cni-604556) Ensuring networks are active...
	I1028 12:35:20.730645  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:20.731276  192895 main.go:141] libmachine: (newest-cni-604556) Ensuring network default is active
	I1028 12:35:20.731568  192895 main.go:141] libmachine: (newest-cni-604556) Ensuring network mk-newest-cni-604556 is active
	I1028 12:35:20.732043  192895 main.go:141] libmachine: (newest-cni-604556) Getting domain xml...
	I1028 12:35:20.732694  192895 main.go:141] libmachine: (newest-cni-604556) Creating domain...
	I1028 12:35:22.029594  192895 main.go:141] libmachine: (newest-cni-604556) Waiting to get IP...
	I1028 12:35:22.030492  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:22.031013  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:22.031052  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:22.030978  192918 retry.go:31] will retry after 220.966357ms: waiting for machine to come up
	I1028 12:35:22.253568  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:22.254193  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:22.254225  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:22.254143  192918 retry.go:31] will retry after 256.739308ms: waiting for machine to come up
	I1028 12:35:22.512845  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:22.513401  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:22.513425  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:22.513357  192918 retry.go:31] will retry after 414.797404ms: waiting for machine to come up
	I1028 12:35:22.930155  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:22.930737  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:22.930806  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:22.930702  192918 retry.go:31] will retry after 515.634791ms: waiting for machine to come up
	I1028 12:35:23.448482  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:23.448997  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:23.449027  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:23.448954  192918 retry.go:31] will retry after 702.447383ms: waiting for machine to come up
	I1028 12:35:24.152743  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:24.153193  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:24.153221  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:24.153134  192918 retry.go:31] will retry after 839.174761ms: waiting for machine to come up
	I1028 12:35:24.994367  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:24.994922  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:24.994956  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:24.994853  192918 retry.go:31] will retry after 997.537311ms: waiting for machine to come up
	I1028 12:35:25.993513  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:25.994023  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:25.994058  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:25.993982  192918 retry.go:31] will retry after 1.274593437s: waiting for machine to come up
	I1028 12:35:27.269987  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:27.270458  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:27.270489  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:27.270428  192918 retry.go:31] will retry after 1.324134428s: waiting for machine to come up
	I1028 12:35:28.596833  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:28.597267  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:28.597291  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:28.597205  192918 retry.go:31] will retry after 1.855907355s: waiting for machine to come up
	I1028 12:35:30.455279  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:30.455868  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:30.455897  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:30.455813  192918 retry.go:31] will retry after 2.293359304s: waiting for machine to come up
	I1028 12:35:32.750767  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:32.751283  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:32.751312  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:32.751225  192918 retry.go:31] will retry after 2.473495916s: waiting for machine to come up
	I1028 12:35:35.226136  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:35.226557  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:35.226598  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:35.226537  192918 retry.go:31] will retry after 3.305982827s: waiting for machine to come up
	I1028 12:35:38.535408  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:38.535903  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find current IP address of domain newest-cni-604556 in network mk-newest-cni-604556
	I1028 12:35:38.535936  192895 main.go:141] libmachine: (newest-cni-604556) DBG | I1028 12:35:38.535849  192918 retry.go:31] will retry after 4.046753468s: waiting for machine to come up
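The retry.go lines above poll for the VM's DHCP lease with progressively longer waits. A minimal stand-alone version of that pattern follows, with a placeholder lookup function standing in for the driver's real lease query.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP calls lookup until it returns an address or the deadline passes,
// growing the sleep between attempts much like the log above (roughly 200ms
// at first, capped at a few seconds).
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay += delay / 2 // grow roughly 1.5x per attempt
		}
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 { // pretend the lease shows up on the fifth poll
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.61.123", nil
	}, time.Minute)
	fmt.Println(ip, err)
}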
	I1028 12:35:42.583846  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:42.584451  192895 main.go:141] libmachine: (newest-cni-604556) Found IP for machine: 192.168.61.123
	I1028 12:35:42.584505  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has current primary IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:42.584519  192895 main.go:141] libmachine: (newest-cni-604556) Reserving static IP address...
	I1028 12:35:42.585059  192895 main.go:141] libmachine: (newest-cni-604556) DBG | unable to find host DHCP lease matching {name: "newest-cni-604556", mac: "52:54:00:1d:cf:4d", ip: "192.168.61.123"} in network mk-newest-cni-604556
	I1028 12:35:42.667887  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Getting to WaitForSSH function...
	I1028 12:35:42.667917  192895 main.go:141] libmachine: (newest-cni-604556) Reserved static IP address: 192.168.61.123
	I1028 12:35:42.667947  192895 main.go:141] libmachine: (newest-cni-604556) Waiting for SSH to be available...
	I1028 12:35:42.670952  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:42.671635  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:42.671675  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:42.671970  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Using SSH client type: external
	I1028 12:35:42.672027  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/id_rsa (-rw-------)
	I1028 12:35:42.672094  192895 main.go:141] libmachine: (newest-cni-604556) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:35:42.672121  192895 main.go:141] libmachine: (newest-cni-604556) DBG | About to run SSH command:
	I1028 12:35:42.672139  192895 main.go:141] libmachine: (newest-cni-604556) DBG | exit 0
	I1028 12:35:42.802454  192895 main.go:141] libmachine: (newest-cni-604556) DBG | SSH cmd err, output: <nil>: 
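The WaitForSSH step above boils down to repeatedly running "ssh ... docker@192.168.61.123 exit 0" until it succeeds. A small sketch of that probe using the system ssh client, with the key path and address taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once "exit 0" can be run over SSH with the given key.
func sshReady(addr, keyPath string) error {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"docker@"+addr,
		"exit", "0",
	)
	return cmd.Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/id_rsa"
	for i := 0; i < 30; i++ {
		if err := sshReady("192.168.61.123", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}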
	I1028 12:35:42.802703  192895 main.go:141] libmachine: (newest-cni-604556) KVM machine creation complete!
	I1028 12:35:42.803056  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetConfigRaw
	I1028 12:35:42.803631  192895 main.go:141] libmachine: (newest-cni-604556) Calling .DriverName
	I1028 12:35:42.803813  192895 main.go:141] libmachine: (newest-cni-604556) Calling .DriverName
	I1028 12:35:42.803977  192895 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 12:35:42.803993  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetState
	I1028 12:35:42.805398  192895 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 12:35:42.805413  192895 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 12:35:42.805420  192895 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 12:35:42.805449  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:42.808330  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:42.808758  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:42.808800  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:42.809028  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:42.809211  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:42.809343  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:42.809451  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:42.809659  192895 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:42.809852  192895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1028 12:35:42.809863  192895 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 12:35:42.920954  192895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:35:42.920977  192895 main.go:141] libmachine: Detecting the provisioner...
	I1028 12:35:42.920985  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:42.923760  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:42.924099  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:42.924131  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:42.924246  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:42.924470  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:42.924642  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:42.924809  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:42.924967  192895 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:42.925168  192895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1028 12:35:42.925181  192895 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 12:35:43.034457  192895 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 12:35:43.034562  192895 main.go:141] libmachine: found compatible host: buildroot
	I1028 12:35:43.034573  192895 main.go:141] libmachine: Provisioning with buildroot...
	I1028 12:35:43.034581  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetMachineName
	I1028 12:35:43.034835  192895 buildroot.go:166] provisioning hostname "newest-cni-604556"
	I1028 12:35:43.034864  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetMachineName
	I1028 12:35:43.035091  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:43.038205  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.038705  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:43.038736  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.038888  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:43.039091  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:43.039263  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:43.039410  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:43.039541  192895 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:43.039705  192895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1028 12:35:43.039716  192895 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-604556 && echo "newest-cni-604556" | sudo tee /etc/hostname
	I1028 12:35:43.171743  192895 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-604556
	
	I1028 12:35:43.171780  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:43.175107  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.175492  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:43.175551  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.175700  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:43.175904  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:43.176110  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:43.176245  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:43.176446  192895 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:43.176691  192895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1028 12:35:43.176718  192895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-604556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-604556/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-604556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:35:43.299643  192895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:35:43.299677  192895 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:35:43.299731  192895 buildroot.go:174] setting up certificates
	I1028 12:35:43.299746  192895 provision.go:84] configureAuth start
	I1028 12:35:43.299763  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetMachineName
	I1028 12:35:43.300077  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetIP
	I1028 12:35:43.302725  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.303084  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:43.303116  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.303299  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:43.305883  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.306232  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:43.306259  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.306380  192895 provision.go:143] copyHostCerts
	I1028 12:35:43.306471  192895 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:35:43.306488  192895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:35:43.306573  192895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:35:43.306734  192895 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:35:43.306746  192895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:35:43.306786  192895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:35:43.306891  192895 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:35:43.306901  192895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:35:43.306943  192895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:35:43.307038  192895 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.newest-cni-604556 san=[127.0.0.1 192.168.61.123 localhost minikube newest-cni-604556]
	I1028 12:35:43.600035  192895 provision.go:177] copyRemoteCerts
	I1028 12:35:43.600099  192895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:35:43.600125  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:43.603009  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.603277  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:43.603307  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.603441  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:43.603666  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:43.603866  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:43.604002  192895 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/id_rsa Username:docker}
	I1028 12:35:43.693604  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 12:35:43.718780  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:35:43.745490  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:35:43.772598  192895 provision.go:87] duration metric: took 472.822944ms to configureAuth
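	[editor's note] The provision step above generates a server certificate signed by the shared minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.61.123, localhost, minikube, newest-cni-604556), and then copies server.pem, server-key.pem and ca.pem into /etc/docker on the guest. Below is a minimal Go sketch of a certificate carrying those SANs; it self-signs for brevity (the real provisioner signs with ca-key.pem), so treat it purely as an illustration of the SAN/usage fields, not as minikube's implementation.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-604556"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the provision.go:117 log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.123")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-604556"},
		}
		// Self-signed here (template used as its own parent); the real flow signs with the CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}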
	I1028 12:35:43.772645  192895 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:35:43.772880  192895 config.go:182] Loaded profile config "newest-cni-604556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:35:43.772968  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:43.775939  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.776359  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:43.776391  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:43.776595  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:43.776820  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:43.777039  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:43.777244  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:43.777450  192895 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:43.777664  192895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1028 12:35:43.777680  192895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:35:44.020500  192895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:35:44.020537  192895 main.go:141] libmachine: Checking connection to Docker...
	I1028 12:35:44.020547  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetURL
	I1028 12:35:44.021960  192895 main.go:141] libmachine: (newest-cni-604556) DBG | Using libvirt version 6000000
	I1028 12:35:44.024662  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.025039  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:44.025072  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.025245  192895 main.go:141] libmachine: Docker is up and running!
	I1028 12:35:44.025261  192895 main.go:141] libmachine: Reticulating splines...
	I1028 12:35:44.025270  192895 client.go:171] duration metric: took 23.918244292s to LocalClient.Create
	I1028 12:35:44.025300  192895 start.go:167] duration metric: took 23.918316628s to libmachine.API.Create "newest-cni-604556"
	I1028 12:35:44.025315  192895 start.go:293] postStartSetup for "newest-cni-604556" (driver="kvm2")
	I1028 12:35:44.025328  192895 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:35:44.025374  192895 main.go:141] libmachine: (newest-cni-604556) Calling .DriverName
	I1028 12:35:44.025630  192895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:35:44.025653  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:44.028308  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.028750  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:44.028781  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.028956  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:44.029132  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:44.029265  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:44.029442  192895 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/id_rsa Username:docker}
	I1028 12:35:44.118813  192895 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:35:44.123548  192895 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:35:44.123577  192895 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:35:44.123648  192895 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:35:44.123736  192895 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:35:44.123834  192895 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:35:44.136504  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:35:44.165906  192895 start.go:296] duration metric: took 140.573765ms for postStartSetup
	I1028 12:35:44.165959  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetConfigRaw
	I1028 12:35:44.166744  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetIP
	I1028 12:35:44.170196  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.170742  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:44.170773  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.171076  192895 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/config.json ...
	I1028 12:35:44.171375  192895 start.go:128] duration metric: took 24.084199733s to createHost
	I1028 12:35:44.171411  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:44.174067  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.174463  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:44.174496  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.174618  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:44.174820  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:44.174999  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:44.175196  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:44.175382  192895 main.go:141] libmachine: Using SSH client type: native
	I1028 12:35:44.175541  192895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1028 12:35:44.175568  192895 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:35:44.290970  192895 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730118944.262403567
	
	I1028 12:35:44.290996  192895 fix.go:216] guest clock: 1730118944.262403567
	I1028 12:35:44.291006  192895 fix.go:229] Guest: 2024-10-28 12:35:44.262403567 +0000 UTC Remote: 2024-10-28 12:35:44.171395148 +0000 UTC m=+24.199725095 (delta=91.008419ms)
	I1028 12:35:44.291047  192895 fix.go:200] guest clock delta is within tolerance: 91.008419ms
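	[editor's note] The clock-skew check above runs `date +%s.%N` on the guest and compares it against the host's wall clock at the moment of the call. A small Go sketch of that arithmetic, using the two timestamps from the log, reproduces the 91.008419ms delta; the one-second tolerance is an assumed threshold for illustration, not necessarily minikube's actual limit.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Guest clock, as returned by the `date +%s.%N` SSH command above.
		guestOut := "1730118944.262403567"
		parts := strings.SplitN(guestOut, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		// Host-side ("Remote") timestamp taken from the same fix.go:229 log line.
		host := time.Date(2024, 10, 28, 12, 35, 44, 171395148, time.UTC)

		delta := guest.Sub(host)
		const tolerance = time.Second // assumed threshold
		ok := delta < tolerance && delta > -tolerance
		fmt.Printf("delta=%v, within %v: %v\n", delta, tolerance, ok) // delta=91.008419ms, within 1s: true
	}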
	I1028 12:35:44.291059  192895 start.go:83] releasing machines lock for "newest-cni-604556", held for 24.204021704s
	I1028 12:35:44.291089  192895 main.go:141] libmachine: (newest-cni-604556) Calling .DriverName
	I1028 12:35:44.291327  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetIP
	I1028 12:35:44.294052  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.294406  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:44.294437  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.294596  192895 main.go:141] libmachine: (newest-cni-604556) Calling .DriverName
	I1028 12:35:44.295216  192895 main.go:141] libmachine: (newest-cni-604556) Calling .DriverName
	I1028 12:35:44.295410  192895 main.go:141] libmachine: (newest-cni-604556) Calling .DriverName
	I1028 12:35:44.295478  192895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:35:44.295526  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:44.295598  192895 ssh_runner.go:195] Run: cat /version.json
	I1028 12:35:44.295616  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHHostname
	I1028 12:35:44.298280  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.298439  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.298598  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:44.298616  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.298816  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:44.298917  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:44.299000  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:44.299032  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:44.299178  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHPort
	I1028 12:35:44.299242  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:44.299339  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHKeyPath
	I1028 12:35:44.299414  192895 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/id_rsa Username:docker}
	I1028 12:35:44.299769  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetSSHUsername
	I1028 12:35:44.299941  192895 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/newest-cni-604556/id_rsa Username:docker}
	I1028 12:35:44.387492  192895 ssh_runner.go:195] Run: systemctl --version
	I1028 12:35:44.419205  192895 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:35:44.581216  192895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:35:44.587806  192895 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:35:44.587870  192895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:35:44.604841  192895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
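	[editor's note] The `find ... -exec mv {} {}.mk_disabled` step above renames any pre-existing bridge or podman CNI configs in /etc/cni/net.d so they no longer take precedence over the CNI that minikube will install. A rough Go equivalent of that rename pass is sketched below, assuming the same directory and suffix; the real step runs the shell command over SSH inside the guest.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled on a previous run
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}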
	I1028 12:35:44.604875  192895 start.go:495] detecting cgroup driver to use...
	I1028 12:35:44.604957  192895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:35:44.622874  192895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:35:44.637812  192895 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:35:44.637876  192895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:35:44.652239  192895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:35:44.667947  192895 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:35:44.793805  192895 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:35:44.989387  192895 docker.go:233] disabling docker service ...
	I1028 12:35:44.989454  192895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:35:45.006223  192895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:35:45.022115  192895 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:35:45.151327  192895 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:35:45.279189  192895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:35:45.295264  192895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:35:45.315765  192895 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:35:45.315825  192895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:35:45.327172  192895 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:35:45.327242  192895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:35:45.339296  192895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:35:45.350772  192895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:35:45.361695  192895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:35:45.373207  192895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:35:45.385815  192895 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:35:45.405763  192895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
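	[editor's note] The block of sed commands above edits the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image to registry.k8s.io/pause:3.10, switches the cgroup manager to cgroupfs with conmon_cgroup = "pod", and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A condensed Go sketch of the first two rewrites is below; it is an illustration of the intended end state of the file, not minikube's actual sed-over-SSH mechanism, and it needs root to write the real path.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Pin the pause image, mirroring the first sed above.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		// Switch to cgroupfs and put conmon in the pod cgroup, mirroring the next three seds.
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}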
	I1028 12:35:45.417439  192895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:35:45.427336  192895 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:35:45.427400  192895 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:35:45.443834  192895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:35:45.455600  192895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:35:45.584499  192895 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:35:45.699875  192895 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:35:45.699947  192895 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:35:45.705088  192895 start.go:563] Will wait 60s for crictl version
	I1028 12:35:45.705165  192895 ssh_runner.go:195] Run: which crictl
	I1028 12:35:45.709334  192895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:35:45.756068  192895 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:35:45.756164  192895 ssh_runner.go:195] Run: crio --version
	I1028 12:35:45.786995  192895 ssh_runner.go:195] Run: crio --version
	I1028 12:35:45.820051  192895 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:35:45.821345  192895 main.go:141] libmachine: (newest-cni-604556) Calling .GetIP
	I1028 12:35:45.824279  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:45.824610  192895 main.go:141] libmachine: (newest-cni-604556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:cf:4d", ip: ""} in network mk-newest-cni-604556: {Iface:virbr1 ExpiryTime:2024-10-28 13:35:36 +0000 UTC Type:0 Mac:52:54:00:1d:cf:4d Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:newest-cni-604556 Clientid:01:52:54:00:1d:cf:4d}
	I1028 12:35:45.824634  192895 main.go:141] libmachine: (newest-cni-604556) DBG | domain newest-cni-604556 has defined IP address 192.168.61.123 and MAC address 52:54:00:1d:cf:4d in network mk-newest-cni-604556
	I1028 12:35:45.824805  192895 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:35:45.829373  192895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:35:45.844814  192895 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1028 12:35:45.846307  192895 kubeadm.go:883] updating cluster {Name:newest-cni-604556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:newest-cni-604556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:35:45.846426  192895 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:35:45.846492  192895 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:35:45.880865  192895 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:35:45.880946  192895 ssh_runner.go:195] Run: which lz4
	I1028 12:35:45.885463  192895 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:35:45.890142  192895 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:35:45.890182  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:35:47.405005  192895 crio.go:462] duration metric: took 1.519566673s to copy over tarball
	I1028 12:35:47.405082  192895 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:35:49.625522  192895 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.220415599s)
	I1028 12:35:49.625561  192895 crio.go:469] duration metric: took 2.22052607s to extract the tarball
	I1028 12:35:49.625570  192895 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:35:49.663734  192895 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:35:49.715386  192895 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:35:49.715416  192895 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:35:49.715427  192895 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.31.2 crio true true} ...
	I1028 12:35:49.715546  192895 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-604556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-604556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:35:49.715634  192895 ssh_runner.go:195] Run: crio config
	I1028 12:35:49.774386  192895 cni.go:84] Creating CNI manager for ""
	I1028 12:35:49.774414  192895 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:35:49.774426  192895 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1028 12:35:49.774456  192895 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-604556 NodeName:newest-cni-604556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:35:49.774642  192895 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-604556"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
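	[editor's note] The kubeadm config above is rendered from the options dumped at kubeadm.go:189 (node name, node IP, pod and service CIDRs, extra args). The Go sketch below shows how a few of those parameters map into the generated YAML using text/template; the template is a minimal stand-in for illustration, not minikube's real bootstrapper template.

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodCIDR}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		params := struct {
			NodeName, NodeIP, PodCIDR, ServiceCIDR string
			APIServerPort                          int
		}{"newest-cni-604556", "192.168.61.123", "10.42.0.0/16", "10.96.0.0/12", 8443}
		template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, params)
	}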
	
	I1028 12:35:49.774732  192895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:35:49.788456  192895 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:35:49.788532  192895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:35:49.798689  192895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1028 12:35:49.818332  192895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:35:49.838883  192895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1028 12:35:49.857844  192895 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I1028 12:35:49.861964  192895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:35:49.875199  192895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:35:50.010956  192895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:35:50.031167  192895 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556 for IP: 192.168.61.123
	I1028 12:35:50.031199  192895 certs.go:194] generating shared ca certs ...
	I1028 12:35:50.031222  192895 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:35:50.031414  192895 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:35:50.031472  192895 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:35:50.031487  192895 certs.go:256] generating profile certs ...
	I1028 12:35:50.031562  192895 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/client.key
	I1028 12:35:50.031580  192895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/client.crt with IP's: []
	I1028 12:35:50.110654  192895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/client.crt ...
	I1028 12:35:50.110682  192895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/client.crt: {Name:mka871985860cc196606121fef01efe6ff5480b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:35:50.110860  192895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/client.key ...
	I1028 12:35:50.110873  192895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/client.key: {Name:mk8f3bf46f794c292c0ef1f44bdc60421d4f5cf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:35:50.110961  192895 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.key.1b1fe6f9
	I1028 12:35:50.110986  192895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.crt.1b1fe6f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.123]
	I1028 12:35:50.457891  192895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.crt.1b1fe6f9 ...
	I1028 12:35:50.457922  192895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.crt.1b1fe6f9: {Name:mk6dd1b05b6c207b90f1abba70bc89f0ec6afa35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:35:50.458122  192895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.key.1b1fe6f9 ...
	I1028 12:35:50.458137  192895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.key.1b1fe6f9: {Name:mk75c278c5327fa00c5480090a8dc48a13a8d719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:35:50.458238  192895 certs.go:381] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.crt.1b1fe6f9 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.crt
	I1028 12:35:50.458311  192895 certs.go:385] copying /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.key.1b1fe6f9 -> /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.key
	I1028 12:35:50.458362  192895 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/proxy-client.key
	I1028 12:35:50.458376  192895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/proxy-client.crt with IP's: []
	I1028 12:35:50.520469  192895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/proxy-client.crt ...
	I1028 12:35:50.520510  192895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/proxy-client.crt: {Name:mkce84c4ef5c3fbfa0ff8e48115c8732a467be85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:35:50.520752  192895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/proxy-client.key ...
	I1028 12:35:50.520778  192895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/proxy-client.key: {Name:mk38b776470956d383ff238dd86398a659865e5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:35:50.521048  192895 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:35:50.521097  192895 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:35:50.521107  192895 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:35:50.521138  192895 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:35:50.521198  192895 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:35:50.521239  192895 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:35:50.521302  192895 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:35:50.522136  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:35:50.551504  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:35:50.578373  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:35:50.605101  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:35:50.631374  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:35:50.658071  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:35:50.690869  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:35:50.721732  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/newest-cni-604556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:35:50.751070  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:35:50.780331  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:35:50.810457  192895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:35:50.842180  192895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:35:50.869333  192895 ssh_runner.go:195] Run: openssl version
	I1028 12:35:50.876559  192895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:35:50.890860  192895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:35:50.896632  192895 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:35:50.896720  192895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:35:50.903688  192895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:35:50.917171  192895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:35:50.930023  192895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:35:50.935056  192895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:35:50.935126  192895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:35:50.941150  192895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:35:50.952340  192895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:35:50.963725  192895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:35:50.969039  192895 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:35:50.969097  192895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:35:50.975622  192895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:35:50.987713  192895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:35:50.992489  192895 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1028 12:35:50.992545  192895 kubeadm.go:392] StartCluster: {Name:newest-cni-604556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-604556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:35:50.992617  192895 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:35:50.992670  192895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:35:51.034694  192895 cri.go:89] found id: ""
	I1028 12:35:51.034771  192895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:35:51.046371  192895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:35:51.058350  192895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:35:51.069051  192895 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:35:51.069070  192895 kubeadm.go:157] found existing configuration files:
	
	I1028 12:35:51.069114  192895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:35:51.080274  192895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:35:51.080355  192895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:35:51.090559  192895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:35:51.101014  192895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:35:51.101097  192895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:35:51.111314  192895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:35:51.121926  192895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:35:51.122021  192895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:35:51.132407  192895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:35:51.142894  192895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:35:51.142965  192895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:35:51.153518  192895 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:35:51.399810  192895 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.112903467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118960112879574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7bfb39a-1739-42cb-b9f2-8f97cbe7abea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.113669563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ffd3129-34ba-4883-9f1e-57d1d7cbbe96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.113723455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ffd3129-34ba-4883-9f1e-57d1d7cbbe96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.113916572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730117852265637465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd0b1cfaed8e317301e345e1380e4c8f691d16be55f60a8174e55e14348cf5,PodSandboxId:3de4d0044ee1509235d20e9c7826b58bfdeb7d7ed66e9adbc86411fcdd1bdee4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730117832153494768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6477bdaa-a202-4792-8bac-8a62b685f645,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71,PodSandboxId:0d6cfae4d63d5dd14d0ef8021ee38a17a03b57d15295048db723c2346ee0ee15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730117828869841621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dg2jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88811f8d-8c45-4ef1-bbf1-8ca151e23d9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730117821627750148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0,PodSandboxId:6acfae32b1e728c4c74e76773b32324192640b63117d573fcbda77727b7b69d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730117821547400166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6rc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92def3e4-45f2-4daa-bd07-5366d364a0
70,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a,PodSandboxId:bee90f9d94d0f0741821a0be06b549d26a92f9d92e1f666eeec2a5c38117f3e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730117816786581473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b72734e1118e90a3e1958d2d15622fd,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7,PodSandboxId:317e986bd949f52a752da60d8d43ef4d4c47aec994d660e525d20e57d03b6784,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730117816799766969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c792fffddb215c8221c3b823ad20352,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221,PodSandboxId:789363630bf5ce72260d96572c6cf0d2008fe42ae9d68c325cc3e01863f303cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730117816748322833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa0aea6a3f71fe70097f4d10ab396e3,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b,PodSandboxId:684d536158c9e09cc6c37e05af9a77fcd62786098c4f4baee59ad048e0be121e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730117816670840177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402113625021a0c8ff4e05374d9ddd07,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ffd3129-34ba-4883-9f1e-57d1d7cbbe96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.166423571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f222635c-c075-4518-8f59-e8de99e3f481 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.166533944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f222635c-c075-4518-8f59-e8de99e3f481 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.168219718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11c867a3-3d40-4ed5-82bb-b5374dd53e23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.168663175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118960168639636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11c867a3-3d40-4ed5-82bb-b5374dd53e23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.169459715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7da809c7-0f43-4b01-abb8-0c2c51322c35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.169556006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7da809c7-0f43-4b01-abb8-0c2c51322c35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.169801087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730117852265637465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd0b1cfaed8e317301e345e1380e4c8f691d16be55f60a8174e55e14348cf5,PodSandboxId:3de4d0044ee1509235d20e9c7826b58bfdeb7d7ed66e9adbc86411fcdd1bdee4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730117832153494768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6477bdaa-a202-4792-8bac-8a62b685f645,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71,PodSandboxId:0d6cfae4d63d5dd14d0ef8021ee38a17a03b57d15295048db723c2346ee0ee15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730117828869841621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dg2jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88811f8d-8c45-4ef1-bbf1-8ca151e23d9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730117821627750148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0,PodSandboxId:6acfae32b1e728c4c74e76773b32324192640b63117d573fcbda77727b7b69d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730117821547400166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6rc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92def3e4-45f2-4daa-bd07-5366d364a0
70,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a,PodSandboxId:bee90f9d94d0f0741821a0be06b549d26a92f9d92e1f666eeec2a5c38117f3e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730117816786581473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b72734e1118e90a3e1958d2d15622fd,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7,PodSandboxId:317e986bd949f52a752da60d8d43ef4d4c47aec994d660e525d20e57d03b6784,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730117816799766969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c792fffddb215c8221c3b823ad20352,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221,PodSandboxId:789363630bf5ce72260d96572c6cf0d2008fe42ae9d68c325cc3e01863f303cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730117816748322833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa0aea6a3f71fe70097f4d10ab396e3,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b,PodSandboxId:684d536158c9e09cc6c37e05af9a77fcd62786098c4f4baee59ad048e0be121e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730117816670840177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402113625021a0c8ff4e05374d9ddd07,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7da809c7-0f43-4b01-abb8-0c2c51322c35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.219404164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcc46994-b8aa-4ec5-aef0-e4ebe1ac4aa3 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.219508260Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcc46994-b8aa-4ec5-aef0-e4ebe1ac4aa3 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.221785485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78ba1503-c399-4e53-92ce-03c8821d7543 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.222350913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118960222318232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78ba1503-c399-4e53-92ce-03c8821d7543 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.223559966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c933d066-cafa-4782-a496-5960887c0e24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.223638161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c933d066-cafa-4782-a496-5960887c0e24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.223900819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730117852265637465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd0b1cfaed8e317301e345e1380e4c8f691d16be55f60a8174e55e14348cf5,PodSandboxId:3de4d0044ee1509235d20e9c7826b58bfdeb7d7ed66e9adbc86411fcdd1bdee4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730117832153494768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6477bdaa-a202-4792-8bac-8a62b685f645,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71,PodSandboxId:0d6cfae4d63d5dd14d0ef8021ee38a17a03b57d15295048db723c2346ee0ee15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730117828869841621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dg2jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88811f8d-8c45-4ef1-bbf1-8ca151e23d9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730117821627750148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0,PodSandboxId:6acfae32b1e728c4c74e76773b32324192640b63117d573fcbda77727b7b69d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730117821547400166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6rc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92def3e4-45f2-4daa-bd07-5366d364a0
70,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a,PodSandboxId:bee90f9d94d0f0741821a0be06b549d26a92f9d92e1f666eeec2a5c38117f3e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730117816786581473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b72734e1118e90a3e1958d2d15622fd,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7,PodSandboxId:317e986bd949f52a752da60d8d43ef4d4c47aec994d660e525d20e57d03b6784,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730117816799766969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c792fffddb215c8221c3b823ad20352,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221,PodSandboxId:789363630bf5ce72260d96572c6cf0d2008fe42ae9d68c325cc3e01863f303cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730117816748322833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa0aea6a3f71fe70097f4d10ab396e3,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b,PodSandboxId:684d536158c9e09cc6c37e05af9a77fcd62786098c4f4baee59ad048e0be121e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730117816670840177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402113625021a0c8ff4e05374d9ddd07,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c933d066-cafa-4782-a496-5960887c0e24 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.271415405Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7476c597-292b-461b-8d58-29e379cc05f7 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.271493675Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7476c597-292b-461b-8d58-29e379cc05f7 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.273817729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65a596a1-8024-45b9-a440-c52b9c4ab366 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.274419613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118960274383341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65a596a1-8024-45b9-a440-c52b9c4ab366 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.275265720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15939805-6099-4b5c-87d1-a27f9a673b38 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.275500820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15939805-6099-4b5c-87d1-a27f9a673b38 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:36:00 no-preload-871884 crio[701]: time="2024-10-28 12:36:00.277165703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1730117852265637465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd0b1cfaed8e317301e345e1380e4c8f691d16be55f60a8174e55e14348cf5,PodSandboxId:3de4d0044ee1509235d20e9c7826b58bfdeb7d7ed66e9adbc86411fcdd1bdee4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1730117832153494768,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6477bdaa-a202-4792-8bac-8a62b685f645,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71,PodSandboxId:0d6cfae4d63d5dd14d0ef8021ee38a17a03b57d15295048db723c2346ee0ee15,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730117828869841621,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dg2jd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88811f8d-8c45-4ef1-bbf1-8ca151e23d9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1,PodSandboxId:da89f953a1d95fa65398e96f0d4f80448ef37c72a041988b4040bde292d5b533,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1730117821627750148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
258c3a3-c7aa-476f-9802-a3e6accd6c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0,PodSandboxId:6acfae32b1e728c4c74e76773b32324192640b63117d573fcbda77727b7b69d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1730117821547400166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6rc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92def3e4-45f2-4daa-bd07-5366d364a0
70,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a,PodSandboxId:bee90f9d94d0f0741821a0be06b549d26a92f9d92e1f666eeec2a5c38117f3e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730117816786581473,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b72734e1118e90a3e1958d2d15622fd,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7,PodSandboxId:317e986bd949f52a752da60d8d43ef4d4c47aec994d660e525d20e57d03b6784,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730117816799766969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c792fffddb215c8221c3b823ad20352,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221,PodSandboxId:789363630bf5ce72260d96572c6cf0d2008fe42ae9d68c325cc3e01863f303cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730117816748322833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa0aea6a3f71fe70097f4d10ab396e3,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b,PodSandboxId:684d536158c9e09cc6c37e05af9a77fcd62786098c4f4baee59ad048e0be121e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730117816670840177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-871884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 402113625021a0c8ff4e05374d9ddd07,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15939805-6099-4b5c-87d1-a27f9a673b38 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8be2c80f222fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   da89f953a1d95       storage-provisioner
	1cfd0b1cfaed8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   3de4d0044ee15       busybox
	9a21fcd9e6d82       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      18 minutes ago      Running             coredns                   1                   0d6cfae4d63d5       coredns-7c65d6cfc9-dg2jd
	3576b8af85140       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       2                   da89f953a1d95       storage-provisioner
	1edb7fc86811a       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      18 minutes ago      Running             kube-proxy                1                   6acfae32b1e72       kube-proxy-6rc4l
	d66cdd02dd211       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   317e986bd949f       etcd-no-preload-871884
	9473dbbdab672       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      19 minutes ago      Running             kube-scheduler            1                   bee90f9d94d0f       kube-scheduler-no-preload-871884
	6d5abde055384       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      19 minutes ago      Running             kube-apiserver            1                   789363630bf5c       kube-apiserver-no-preload-871884
	16a1ce9b3f38f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      19 minutes ago      Running             kube-controller-manager   1                   684d536158c9e       kube-controller-manager-no-preload-871884
	
	
	==> coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45001 - 56249 "HINFO IN 2374450671205517086.5057763595071633460. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028030753s
	
	
	==> describe nodes <==
	Name:               no-preload-871884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-871884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=no-preload-871884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_07_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:07:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-871884
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:35:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:32:49 +0000   Mon, 28 Oct 2024 12:07:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:32:49 +0000   Mon, 28 Oct 2024 12:07:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:32:49 +0000   Mon, 28 Oct 2024 12:07:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:32:49 +0000   Mon, 28 Oct 2024 12:17:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.156
	  Hostname:    no-preload-871884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac90635fe0f24ef7972af2d0c7fd5465
	  System UUID:                ac90635f-e0f2-4ef7-972a-f2d0c7fd5465
	  Boot ID:                    82ccc450-12db-4ea8-95eb-2b73f2d929bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-dg2jd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-871884                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-871884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-871884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-6rc4l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-871884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-xr9lt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-871884 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node no-preload-871884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-871884 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-871884 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-871884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-871884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-871884 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-871884 event: Registered Node no-preload-871884 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-871884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-871884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-871884 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-871884 event: Registered Node no-preload-871884 in Controller
	
	
	==> dmesg <==
	[Oct28 12:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056198] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044787] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.203142] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.829748] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.647218] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.292031] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.070650] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068809] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.180329] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.131699] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.318490] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[ +16.558484] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.068179] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.836773] systemd-fstab-generator[1420]: Ignoring "noauto" option for root device
	[Oct28 12:17] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.415196] systemd-fstab-generator[2057]: Ignoring "noauto" option for root device
	[  +3.249930] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.089529] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] <==
	{"level":"info","ts":"2024-10-28T12:16:58.833470Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-28T12:16:58.834386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-28T12:16:58.834700Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.156:2379"}
	{"level":"info","ts":"2024-10-28T12:26:58.860729Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":884}
	{"level":"info","ts":"2024-10-28T12:26:58.877475Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":884,"took":"16.238822ms","hash":1806140554,"current-db-size-bytes":2682880,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2682880,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-28T12:26:58.877567Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1806140554,"revision":884,"compact-revision":-1}
	{"level":"info","ts":"2024-10-28T12:31:58.868529Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1127}
	{"level":"info","ts":"2024-10-28T12:31:58.872263Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1127,"took":"3.34849ms","hash":3938152017,"current-db-size-bytes":2682880,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-28T12:31:58.872326Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3938152017,"revision":1127,"compact-revision":884}
	{"level":"info","ts":"2024-10-28T12:35:51.741103Z","caller":"traceutil/trace.go:171","msg":"trace[170177689] transaction","detail":"{read_only:false; response_revision:1558; number_of_response:1; }","duration":"127.341724ms","start":"2024-10-28T12:35:51.613712Z","end":"2024-10-28T12:35:51.741054Z","steps":["trace[170177689] 'process raft request'  (duration: 127.168727ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:35:52.128672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.042491ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10016448481433304227 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0b0192d30ed53ca2>","response":"size:39"}
	{"level":"info","ts":"2024-10-28T12:35:52.128828Z","caller":"traceutil/trace.go:171","msg":"trace[2025822944] linearizableReadLoop","detail":"{readStateIndex:1832; appliedIndex:1831; }","duration":"309.966038ms","start":"2024-10-28T12:35:51.818851Z","end":"2024-10-28T12:35:52.128817Z","steps":["trace[2025822944] 'read index received'  (duration: 52.422158ms)","trace[2025822944] 'applied index is now lower than readState.Index'  (duration: 257.542807ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:35:52.128886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.043117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:35:52.128900Z","caller":"traceutil/trace.go:171","msg":"trace[415821554] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1558; }","duration":"310.071011ms","start":"2024-10-28T12:35:51.818824Z","end":"2024-10-28T12:35:52.128895Z","steps":["trace[415821554] 'agreement among raft nodes before linearized reading'  (duration: 310.026406ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:35:52.128925Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:35:51.818777Z","time spent":"310.135477ms","remote":"127.0.0.1:34556","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-28T12:35:52.129123Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:35:51.743533Z","time spent":"385.588039ms","remote":"127.0.0.1:34418","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-10-28T12:35:52.393959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.574025ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10016448481433304229 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.156\" mod_revision:1551 > success:<request_put:<key:\"/registry/masterleases/192.168.72.156\" value_size:67 lease:793076444578528418 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.156\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T12:35:52.394103Z","caller":"traceutil/trace.go:171","msg":"trace[459856823] linearizableReadLoop","detail":"{readStateIndex:1833; appliedIndex:1832; }","duration":"244.981092ms","start":"2024-10-28T12:35:52.149109Z","end":"2024-10-28T12:35:52.394091Z","steps":["trace[459856823] 'read index received'  (duration: 110.476163ms)","trace[459856823] 'applied index is now lower than readState.Index'  (duration: 134.50385ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T12:35:52.394208Z","caller":"traceutil/trace.go:171","msg":"trace[277393381] transaction","detail":"{read_only:false; response_revision:1559; number_of_response:1; }","duration":"264.312042ms","start":"2024-10-28T12:35:52.129880Z","end":"2024-10-28T12:35:52.394192Z","steps":["trace[277393381] 'process raft request'  (duration: 129.765397ms)","trace[277393381] 'compare'  (duration: 133.451276ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:35:52.394224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.103172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:35:52.394353Z","caller":"traceutil/trace.go:171","msg":"trace[1904509750] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1559; }","duration":"245.236872ms","start":"2024-10-28T12:35:52.149104Z","end":"2024-10-28T12:35:52.394341Z","steps":["trace[1904509750] 'agreement among raft nodes before linearized reading'  (duration: 245.024328ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:35:56.002268Z","caller":"traceutil/trace.go:171","msg":"trace[1261274825] linearizableReadLoop","detail":"{readStateIndex:1837; appliedIndex:1836; }","duration":"183.569204ms","start":"2024-10-28T12:35:55.818681Z","end":"2024-10-28T12:35:56.002250Z","steps":["trace[1261274825] 'read index received'  (duration: 183.409609ms)","trace[1261274825] 'applied index is now lower than readState.Index'  (duration: 158.624µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:35:56.002441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.737547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:35:56.002490Z","caller":"traceutil/trace.go:171","msg":"trace[1337630499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1563; }","duration":"183.803677ms","start":"2024-10-28T12:35:55.818677Z","end":"2024-10-28T12:35:56.002481Z","steps":["trace[1337630499] 'agreement among raft nodes before linearized reading'  (duration: 183.694641ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:35:56.002830Z","caller":"traceutil/trace.go:171","msg":"trace[1503574089] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"243.445794ms","start":"2024-10-28T12:35:55.759369Z","end":"2024-10-28T12:35:56.002815Z","steps":["trace[1503574089] 'process raft request'  (duration: 242.771089ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:36:00 up 19 min,  0 users,  load average: 0.03, 0.10, 0.08
	Linux no-preload-871884 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] <==
	W1028 12:32:01.174476       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:32:01.174613       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:32:01.175768       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:32:01.175850       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:33:01.176632       1 handler_proxy.go:99] no RequestInfo found in the context
	W1028 12:33:01.176703       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:33:01.177118       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1028 12:33:01.177401       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:33:01.178484       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:33:01.178559       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:35:01.178705       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:35:01.178921       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 12:35:01.178963       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:35:01.178983       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 12:35:01.180176       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:35:01.180300       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] <==
	E1028 12:30:33.958861       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:30:34.428478       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:31:03.966449       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:31:04.439743       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:31:33.972093       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:31:34.448441       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:32:03.978898       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:32:04.456120       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:32:33.985725       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:32:34.464912       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:32:49.219270       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-871884"
	E1028 12:33:03.994753       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:33:04.472753       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:33:21.086067       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="293.479µs"
	E1028 12:33:34.001055       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:33:34.480164       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:33:36.085931       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="165.367µs"
	E1028 12:34:04.008296       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:34:04.489801       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:34:34.014267       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:34:34.498392       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:35:04.020886       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:35:04.508843       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:35:34.026714       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:35:34.516939       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:17:01.779432       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:17:01.792752       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.156"]
	E1028 12:17:01.792840       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:17:01.831185       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:17:01.831230       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:17:01.831266       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:17:01.833698       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:17:01.834202       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:17:01.834238       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:17:01.836164       1 config.go:199] "Starting service config controller"
	I1028 12:17:01.836204       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:17:01.836234       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:17:01.836258       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:17:01.837063       1 config.go:328] "Starting node config controller"
	I1028 12:17:01.837095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:17:01.937063       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 12:17:01.937115       1 shared_informer.go:320] Caches are synced for node config
	I1028 12:17:01.937126       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] <==
	I1028 12:16:58.062633       1 serving.go:386] Generated self-signed cert in-memory
	W1028 12:17:00.173613       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1028 12:17:00.173719       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1028 12:17:00.173752       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1028 12:17:00.173775       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1028 12:17:00.192176       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1028 12:17:00.192270       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:17:00.194383       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1028 12:17:00.194557       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1028 12:17:00.194620       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1028 12:17:00.194692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1028 12:17:00.295175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 12:34:56 no-preload-871884 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 12:34:56 no-preload-871884 kubelet[1427]: E1028 12:34:56.346313    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118896344145203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:34:56 no-preload-871884 kubelet[1427]: E1028 12:34:56.347102    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118896344145203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:34:58 no-preload-871884 kubelet[1427]: E1028 12:34:58.065438    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:35:06 no-preload-871884 kubelet[1427]: E1028 12:35:06.349677    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118906349439036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:06 no-preload-871884 kubelet[1427]: E1028 12:35:06.349702    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118906349439036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:11 no-preload-871884 kubelet[1427]: E1028 12:35:11.065634    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:35:16 no-preload-871884 kubelet[1427]: E1028 12:35:16.350833    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118916350422417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:16 no-preload-871884 kubelet[1427]: E1028 12:35:16.351186    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118916350422417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:25 no-preload-871884 kubelet[1427]: E1028 12:35:25.065181    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:35:26 no-preload-871884 kubelet[1427]: E1028 12:35:26.352609    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118926352316988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:26 no-preload-871884 kubelet[1427]: E1028 12:35:26.352857    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118926352316988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:36 no-preload-871884 kubelet[1427]: E1028 12:35:36.354948    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118936354730098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:36 no-preload-871884 kubelet[1427]: E1028 12:35:36.355043    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118936354730098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:39 no-preload-871884 kubelet[1427]: E1028 12:35:39.065674    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:35:46 no-preload-871884 kubelet[1427]: E1028 12:35:46.357112    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118946356658479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:46 no-preload-871884 kubelet[1427]: E1028 12:35:46.357169    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118946356658479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:53 no-preload-871884 kubelet[1427]: E1028 12:35:53.066318    1427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xr9lt" podUID="62926d83-9891-4dec-b0ed-a1fa87e0dd28"
	Oct 28 12:35:56 no-preload-871884 kubelet[1427]: E1028 12:35:56.089210    1427 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 12:35:56 no-preload-871884 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 12:35:56 no-preload-871884 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 12:35:56 no-preload-871884 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 12:35:56 no-preload-871884 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 12:35:56 no-preload-871884 kubelet[1427]: E1028 12:35:56.360281    1427 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118956358302564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:35:56 no-preload-871884 kubelet[1427]: E1028 12:35:56.360342    1427 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118956358302564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] <==
	I1028 12:17:01.751658       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1028 12:17:31.754707       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] <==
	I1028 12:17:32.376322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:17:32.399429       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:17:32.399510       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:17:49.801847       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:17:49.802133       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-871884_69447087-feff-4949-a3e4-b8b1c4a352ae!
	I1028 12:17:49.802315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fcd4864f-0556-4b38-ba15-d73472c15cbf", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-871884_69447087-feff-4949-a3e4-b8b1c4a352ae became leader
	I1028 12:17:49.904977       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-871884_69447087-feff-4949-a3e4-b8b1c4a352ae!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-871884 -n no-preload-871884
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-871884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xr9lt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-871884 describe pod metrics-server-6867b74b74-xr9lt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-871884 describe pod metrics-server-6867b74b74-xr9lt: exit status 1 (70.94015ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xr9lt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-871884 describe pod metrics-server-6867b74b74-xr9lt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.67s)
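For reference, a minimal shell sketch of how the post-mortem pod check above could be repeated by hand, assuming kubectl still has the no-preload-871884 context. The kubelet log places the metrics-server pod in the kube-system namespace, so the NotFound above is consistent with describing it without an explicit namespace (or with the pod having already been replaced under a new name):

	# list pods that are not in the Running phase, across all namespaces
	kubectl --context no-preload-871884 get po -A --field-selector=status.phase!=Running
	# describe the pod reported above; kube-system is where the kubelet log says it runs
	kubectl --context no-preload-871884 -n kube-system describe pod metrics-server-6867b74b74-xr9lt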

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (493.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-28 12:38:48.260613852 +0000 UTC m=+6250.928838237
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-349222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-349222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.884µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-349222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
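The assertion above expects the dashboard-metrics-scraper deployment to reference the test's stand-in image registry.k8s.io/echoserver:1.4. A minimal sketch, assuming the default-k8s-diff-port-349222 context is still reachable (the test's own Go context had already expired, which is consistent with the near-instant "context deadline exceeded" failure above), of how that check could be made directly:

	# wait for the dashboard pods the test was polling for
	kubectl --context default-k8s-diff-port-349222 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=90s
	# print each dashboard deployment and its container images; the test expects one of them
	# to contain registry.k8s.io/echoserver:1.4
	kubectl --context default-k8s-diff-port-349222 -n kubernetes-dashboard get deploy -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'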
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-349222 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-349222 logs -n 25: (1.956653855s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo cat                           | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo cat                           | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo cat                           | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo docker                        | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo cat                           | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo cat                           | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo cat                           | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo cat                           | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo                               | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo find                          | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-903216 sudo crio                          | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-903216                                    | kindnet-903216            | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC | 28 Oct 24 12:38 UTC |
	| start   | -p enable-default-cni-903216                         | enable-default-cni-903216 | jenkins | v1.34.0 | 28 Oct 24 12:38 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:38:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:38:46.504910  198445 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:38:46.505037  198445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:38:46.505046  198445 out.go:358] Setting ErrFile to fd 2...
	I1028 12:38:46.505050  198445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:38:46.505242  198445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:38:46.505923  198445 out.go:352] Setting JSON to false
	I1028 12:38:46.507086  198445 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8469,"bootTime":1730110657,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:38:46.507193  198445 start.go:139] virtualization: kvm guest
	I1028 12:38:46.509711  198445 out.go:177] * [enable-default-cni-903216] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:38:46.511232  198445 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:38:46.511258  198445 notify.go:220] Checking for updates...
	I1028 12:38:46.514222  198445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:38:46.515762  198445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:38:46.517211  198445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:38:46.518607  198445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:38:46.519848  198445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:38:46.521713  198445 config.go:182] Loaded profile config "calico-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:38:46.521847  198445 config.go:182] Loaded profile config "custom-flannel-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:38:46.521966  198445 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:38:46.522074  198445 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:38:46.563601  198445 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 12:38:46.565171  198445 start.go:297] selected driver: kvm2
	I1028 12:38:46.565188  198445 start.go:901] validating driver "kvm2" against <nil>
	I1028 12:38:46.565216  198445 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:38:46.566026  198445 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:38:46.566149  198445 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:38:46.587654  198445 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:38:46.587713  198445 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E1028 12:38:46.587964  198445 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1028 12:38:46.587994  198445 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:38:46.588030  198445 cni.go:84] Creating CNI manager for "bridge"
	I1028 12:38:46.588052  198445 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 12:38:46.588121  198445 start.go:340] cluster config:
	{Name:enable-default-cni-903216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-903216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:38:46.588244  198445 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:38:46.590337  198445 out.go:177] * Starting "enable-default-cni-903216" primary control-plane node in "enable-default-cni-903216" cluster
	I1028 12:38:45.022581  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.023221  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has current primary IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.023252  196623 main.go:141] libmachine: (custom-flannel-903216) Found IP for machine: 192.168.39.192
	I1028 12:38:45.023276  196623 main.go:141] libmachine: (custom-flannel-903216) Reserving static IP address...
	I1028 12:38:45.023622  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | unable to find host DHCP lease matching {name: "custom-flannel-903216", mac: "52:54:00:87:47:b4", ip: "192.168.39.192"} in network mk-custom-flannel-903216
	I1028 12:38:45.118285  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | Getting to WaitForSSH function...
	I1028 12:38:45.118318  196623 main.go:141] libmachine: (custom-flannel-903216) Reserved static IP address: 192.168.39.192
	I1028 12:38:45.118327  196623 main.go:141] libmachine: (custom-flannel-903216) Waiting for SSH to be available...
	I1028 12:38:45.126555  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.127192  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:47:b4", ip: ""} in network mk-custom-flannel-903216: {Iface:virbr3 ExpiryTime:2024-10-28 13:38:38 +0000 UTC Type:0 Mac:52:54:00:87:47:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:47:b4}
	I1028 12:38:45.127222  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.127354  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | Using SSH client type: external
	I1028 12:38:45.127379  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/custom-flannel-903216/id_rsa (-rw-------)
	I1028 12:38:45.127427  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/custom-flannel-903216/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:38:45.127436  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | About to run SSH command:
	I1028 12:38:45.127448  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | exit 0
	I1028 12:38:45.262524  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | SSH cmd err, output: <nil>: 
	I1028 12:38:45.262907  196623 main.go:141] libmachine: (custom-flannel-903216) KVM machine creation complete!
	I1028 12:38:45.263268  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetConfigRaw
	I1028 12:38:45.263912  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .DriverName
	I1028 12:38:45.264128  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .DriverName
	I1028 12:38:45.264313  196623 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1028 12:38:45.264331  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetState
	I1028 12:38:45.265905  196623 main.go:141] libmachine: Detecting operating system of created instance...
	I1028 12:38:45.265920  196623 main.go:141] libmachine: Waiting for SSH to be available...
	I1028 12:38:45.265937  196623 main.go:141] libmachine: Getting to WaitForSSH function...
	I1028 12:38:45.265946  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHHostname
	I1028 12:38:45.269380  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.269903  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:47:b4", ip: ""} in network mk-custom-flannel-903216: {Iface:virbr3 ExpiryTime:2024-10-28 13:38:38 +0000 UTC Type:0 Mac:52:54:00:87:47:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:custom-flannel-903216 Clientid:01:52:54:00:87:47:b4}
	I1028 12:38:45.269932  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.270170  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHPort
	I1028 12:38:45.270367  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:45.270548  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:45.270709  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHUsername
	I1028 12:38:45.270901  196623 main.go:141] libmachine: Using SSH client type: native
	I1028 12:38:45.271149  196623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I1028 12:38:45.271165  196623 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1028 12:38:45.398293  196623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:38:45.398319  196623 main.go:141] libmachine: Detecting the provisioner...
	I1028 12:38:45.398330  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHHostname
	I1028 12:38:45.401719  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.402216  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:47:b4", ip: ""} in network mk-custom-flannel-903216: {Iface:virbr3 ExpiryTime:2024-10-28 13:38:38 +0000 UTC Type:0 Mac:52:54:00:87:47:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:custom-flannel-903216 Clientid:01:52:54:00:87:47:b4}
	I1028 12:38:45.402269  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.402441  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHPort
	I1028 12:38:45.402684  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:45.402867  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:45.403057  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHUsername
	I1028 12:38:45.403267  196623 main.go:141] libmachine: Using SSH client type: native
	I1028 12:38:45.403480  196623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I1028 12:38:45.403506  196623 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1028 12:38:45.528085  196623 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1028 12:38:45.528173  196623 main.go:141] libmachine: found compatible host: buildroot
	I1028 12:38:45.528187  196623 main.go:141] libmachine: Provisioning with buildroot...
	I1028 12:38:45.528201  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetMachineName
	I1028 12:38:45.528464  196623 buildroot.go:166] provisioning hostname "custom-flannel-903216"
	I1028 12:38:45.528489  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetMachineName
	I1028 12:38:45.528676  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHHostname
	I1028 12:38:45.531639  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.532371  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:47:b4", ip: ""} in network mk-custom-flannel-903216: {Iface:virbr3 ExpiryTime:2024-10-28 13:38:38 +0000 UTC Type:0 Mac:52:54:00:87:47:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:custom-flannel-903216 Clientid:01:52:54:00:87:47:b4}
	I1028 12:38:45.532402  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.532412  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHPort
	I1028 12:38:45.532664  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:45.532983  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:45.533143  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHUsername
	I1028 12:38:45.533344  196623 main.go:141] libmachine: Using SSH client type: native
	I1028 12:38:45.533666  196623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I1028 12:38:45.533702  196623 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-903216 && echo "custom-flannel-903216" | sudo tee /etc/hostname
	I1028 12:38:45.671241  196623 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-903216
	
	I1028 12:38:45.671277  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHHostname
	I1028 12:38:45.674985  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.706507  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:47:b4", ip: ""} in network mk-custom-flannel-903216: {Iface:virbr3 ExpiryTime:2024-10-28 13:38:38 +0000 UTC Type:0 Mac:52:54:00:87:47:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:custom-flannel-903216 Clientid:01:52:54:00:87:47:b4}
	I1028 12:38:45.706541  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:45.706694  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHPort
	I1028 12:38:45.706898  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:45.707120  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:45.707299  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHUsername
	I1028 12:38:45.707490  196623 main.go:141] libmachine: Using SSH client type: native
	I1028 12:38:45.707710  196623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I1028 12:38:45.707737  196623 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-903216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-903216/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-903216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:38:45.832690  196623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:38:45.832720  196623 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:38:45.832743  196623 buildroot.go:174] setting up certificates
	I1028 12:38:45.832758  196623 provision.go:84] configureAuth start
	I1028 12:38:45.832770  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetMachineName
	I1028 12:38:45.833086  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetIP
	I1028 12:38:46.252911  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:46.253305  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:47:b4", ip: ""} in network mk-custom-flannel-903216: {Iface:virbr3 ExpiryTime:2024-10-28 13:38:38 +0000 UTC Type:0 Mac:52:54:00:87:47:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:custom-flannel-903216 Clientid:01:52:54:00:87:47:b4}
	I1028 12:38:46.253354  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:46.253504  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHHostname
	I1028 12:38:46.256228  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:46.256691  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:47:b4", ip: ""} in network mk-custom-flannel-903216: {Iface:virbr3 ExpiryTime:2024-10-28 13:38:38 +0000 UTC Type:0 Mac:52:54:00:87:47:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:custom-flannel-903216 Clientid:01:52:54:00:87:47:b4}
	I1028 12:38:46.256733  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:46.256952  196623 provision.go:143] copyHostCerts
	I1028 12:38:46.257026  196623 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:38:46.257047  196623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:38:46.257098  196623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:38:46.257220  196623 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:38:46.257231  196623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:38:46.257257  196623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:38:46.257344  196623 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:38:46.257354  196623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:38:46.257388  196623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:38:46.257471  196623 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-903216 san=[127.0.0.1 192.168.39.192 custom-flannel-903216 localhost minikube]
	I1028 12:38:46.414088  196623 provision.go:177] copyRemoteCerts
	I1028 12:38:46.414147  196623 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:38:46.414170  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHHostname
	I1028 12:38:46.417139  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:46.417622  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:47:b4", ip: ""} in network mk-custom-flannel-903216: {Iface:virbr3 ExpiryTime:2024-10-28 13:38:38 +0000 UTC Type:0 Mac:52:54:00:87:47:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:custom-flannel-903216 Clientid:01:52:54:00:87:47:b4}
	I1028 12:38:46.417655  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:46.417804  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHPort
	I1028 12:38:46.418013  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:46.418184  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHUsername
	I1028 12:38:46.418327  196623 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/custom-flannel-903216/id_rsa Username:docker}
	I1028 12:38:46.505245  196623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:38:46.536135  196623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:38:46.567245  196623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:38:46.598893  196623 provision.go:87] duration metric: took 766.121613ms to configureAuth
	I1028 12:38:46.598927  196623 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:38:46.599093  196623 config.go:182] Loaded profile config "custom-flannel-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:38:46.599162  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHHostname
	I1028 12:38:46.602159  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:46.602610  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:47:b4", ip: ""} in network mk-custom-flannel-903216: {Iface:virbr3 ExpiryTime:2024-10-28 13:38:38 +0000 UTC Type:0 Mac:52:54:00:87:47:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:custom-flannel-903216 Clientid:01:52:54:00:87:47:b4}
	I1028 12:38:46.602646  196623 main.go:141] libmachine: (custom-flannel-903216) DBG | domain custom-flannel-903216 has defined IP address 192.168.39.192 and MAC address 52:54:00:87:47:b4 in network mk-custom-flannel-903216
	I1028 12:38:46.602877  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHPort
	I1028 12:38:46.603124  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:46.603290  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHKeyPath
	I1028 12:38:46.603494  196623 main.go:141] libmachine: (custom-flannel-903216) Calling .GetSSHUsername
	I1028 12:38:46.603682  196623 main.go:141] libmachine: Using SSH client type: native
	I1028 12:38:46.603925  196623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I1028 12:38:46.603955  196623 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:38:46.591867  198445 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:38:46.591935  198445 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:38:46.591952  198445 cache.go:56] Caching tarball of preloaded images
	I1028 12:38:46.592065  198445 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:38:46.592081  198445 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:38:46.592236  198445 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/enable-default-cni-903216/config.json ...
	I1028 12:38:46.592261  198445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/enable-default-cni-903216/config.json: {Name:mk9dd4fe6cf583597b7e53661b4f3607899b6a2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:38:46.592441  198445 start.go:360] acquireMachinesLock for enable-default-cni-903216: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:38:47.115417  198445 start.go:364] duration metric: took 522.94737ms to acquireMachinesLock for "enable-default-cni-903216"
	I1028 12:38:47.115497  198445 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-903216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-903216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:38:47.115638  198445 start.go:125] createHost starting for "" (driver="kvm2")
	I1028 12:38:45.359288  196326 addons.go:510] duration metric: took 1.484437508s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1028 12:38:45.374930  196326 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-903216" context rescaled to 1 replicas
	I1028 12:38:46.232509  196326 node_ready.go:53] node "calico-903216" has status "Ready":"False"
	
	
	==> CRI-O <==
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.094306260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119129094204512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af8526d3-8f7e-4d29-95ca-9c61f57641c2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.096507099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fbf51e6-c847-41e3-88f1-587063190764 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.096589668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fbf51e6-c847-41e3-88f1-587063190764 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.096866656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58,PodSandboxId:bb0049c7ac79f5d2502147a6d550a358c0f8048136026a2a3b3014bd0bc903d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085054804530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rxfxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b917b614-94ef-4c38-a1f4-60422af4bb73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f,PodSandboxId:9dfe25fb53ae1b10df34084a9219acc23912337f8b0b3ead62a6e88eb922ca8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085116079070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkcb7,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 0531b433-940f-4d3d-aae4-9fe5a1b96815,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c,PodSandboxId:84b37f3c41fb7f9fec904ed880d45c56bd5e87aa6cd2924d5f9a0a0994b93a6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1730118085089335766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b672315-a64e-4222-b07a-3a76050a3b67,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90,PodSandboxId:848af5b289652b60967283f36cc1ede29e347dda0af6d89bc84d91ae7cb4f014,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730118084906167167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6krbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab77549-1b29-4a66-b284-d63774357f88,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05,PodSandboxId:959c35f94c2d476ff6502e969d0d43ae9a7c12aef7d9a0a37c15aa00c12219c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118072857229176,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cd23c8951cc85d7333a08820d77e65,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e,PodSandboxId:9472931a10611f84b527697e528fa6a9610c298a9506b5a6d73bd9b67f5a6216,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118072836335606,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ed87fb6b1af6953f1209b69f39ac00,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd,PodSandboxId:df35e1501e17f8b045bf2e7151c19852afbd31801f8209648635029ca99f9958,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118072816558433,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38,PodSandboxId:99b42080bcf0cd2d9a440698337e234b4b41a7bb1620642ada71a9a2602e33a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118072722452677,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160601a4b03eef26d86ee8a233bf746d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487,PodSandboxId:3760c60af964c998070deeb262c8ed9c28d88223e7b274e777709b87ce462898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117784307083144,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fbf51e6-c847-41e3-88f1-587063190764 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.165686425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a105e05-554d-4912-bd48-e9e8fd21405d name=/runtime.v1.RuntimeService/Version
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.165823784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a105e05-554d-4912-bd48-e9e8fd21405d name=/runtime.v1.RuntimeService/Version
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.167885063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60ff7c31-2b5e-4767-aadb-28f9ab400f6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.168561912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119129168526863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60ff7c31-2b5e-4767-aadb-28f9ab400f6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.169490555Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e907818-fb1a-4c47-9ee5-0039113b1e1d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.169593679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e907818-fb1a-4c47-9ee5-0039113b1e1d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.169876617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58,PodSandboxId:bb0049c7ac79f5d2502147a6d550a358c0f8048136026a2a3b3014bd0bc903d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085054804530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rxfxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b917b614-94ef-4c38-a1f4-60422af4bb73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f,PodSandboxId:9dfe25fb53ae1b10df34084a9219acc23912337f8b0b3ead62a6e88eb922ca8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085116079070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkcb7,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 0531b433-940f-4d3d-aae4-9fe5a1b96815,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c,PodSandboxId:84b37f3c41fb7f9fec904ed880d45c56bd5e87aa6cd2924d5f9a0a0994b93a6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1730118085089335766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b672315-a64e-4222-b07a-3a76050a3b67,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90,PodSandboxId:848af5b289652b60967283f36cc1ede29e347dda0af6d89bc84d91ae7cb4f014,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730118084906167167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6krbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab77549-1b29-4a66-b284-d63774357f88,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05,PodSandboxId:959c35f94c2d476ff6502e969d0d43ae9a7c12aef7d9a0a37c15aa00c12219c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118072857229176,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cd23c8951cc85d7333a08820d77e65,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e,PodSandboxId:9472931a10611f84b527697e528fa6a9610c298a9506b5a6d73bd9b67f5a6216,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118072836335606,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ed87fb6b1af6953f1209b69f39ac00,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd,PodSandboxId:df35e1501e17f8b045bf2e7151c19852afbd31801f8209648635029ca99f9958,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118072816558433,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38,PodSandboxId:99b42080bcf0cd2d9a440698337e234b4b41a7bb1620642ada71a9a2602e33a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118072722452677,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160601a4b03eef26d86ee8a233bf746d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487,PodSandboxId:3760c60af964c998070deeb262c8ed9c28d88223e7b274e777709b87ce462898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117784307083144,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e907818-fb1a-4c47-9ee5-0039113b1e1d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.238504587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afb4b55a-7348-4a41-8705-96ceee987495 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.238633293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afb4b55a-7348-4a41-8705-96ceee987495 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.240377885Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=157e84a5-1aa8-4656-a6b1-fe1df6722174 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.241004707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119129240966484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=157e84a5-1aa8-4656-a6b1-fe1df6722174 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.242460808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a069fc0-3c0a-42f3-9213-c5ea40adeb02 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.242571818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a069fc0-3c0a-42f3-9213-c5ea40adeb02 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.243168558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58,PodSandboxId:bb0049c7ac79f5d2502147a6d550a358c0f8048136026a2a3b3014bd0bc903d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085054804530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rxfxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b917b614-94ef-4c38-a1f4-60422af4bb73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f,PodSandboxId:9dfe25fb53ae1b10df34084a9219acc23912337f8b0b3ead62a6e88eb922ca8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085116079070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkcb7,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 0531b433-940f-4d3d-aae4-9fe5a1b96815,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c,PodSandboxId:84b37f3c41fb7f9fec904ed880d45c56bd5e87aa6cd2924d5f9a0a0994b93a6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1730118085089335766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b672315-a64e-4222-b07a-3a76050a3b67,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90,PodSandboxId:848af5b289652b60967283f36cc1ede29e347dda0af6d89bc84d91ae7cb4f014,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730118084906167167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6krbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab77549-1b29-4a66-b284-d63774357f88,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05,PodSandboxId:959c35f94c2d476ff6502e969d0d43ae9a7c12aef7d9a0a37c15aa00c12219c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118072857229176,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cd23c8951cc85d7333a08820d77e65,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e,PodSandboxId:9472931a10611f84b527697e528fa6a9610c298a9506b5a6d73bd9b67f5a6216,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118072836335606,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ed87fb6b1af6953f1209b69f39ac00,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd,PodSandboxId:df35e1501e17f8b045bf2e7151c19852afbd31801f8209648635029ca99f9958,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118072816558433,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38,PodSandboxId:99b42080bcf0cd2d9a440698337e234b4b41a7bb1620642ada71a9a2602e33a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118072722452677,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160601a4b03eef26d86ee8a233bf746d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487,PodSandboxId:3760c60af964c998070deeb262c8ed9c28d88223e7b274e777709b87ce462898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117784307083144,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a069fc0-3c0a-42f3-9213-c5ea40adeb02 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.303856179Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55619dba-1be1-4fe4-95c5-4a57b8793460 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.303977642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55619dba-1be1-4fe4-95c5-4a57b8793460 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.305608548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b8bf90e-ba94-412b-aa4a-e8f919858d3e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.306227421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119129306191296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b8bf90e-ba94-412b-aa4a-e8f919858d3e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.307056939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4a9e802-c3a7-4f1f-bf25-83b40cfb9444 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.307165470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4a9e802-c3a7-4f1f-bf25-83b40cfb9444 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:38:49 default-k8s-diff-port-349222 crio[712]: time="2024-10-28 12:38:49.307540952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58,PodSandboxId:bb0049c7ac79f5d2502147a6d550a358c0f8048136026a2a3b3014bd0bc903d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085054804530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rxfxk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b917b614-94ef-4c38-a1f4-60422af4bb73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f,PodSandboxId:9dfe25fb53ae1b10df34084a9219acc23912337f8b0b3ead62a6e88eb922ca8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1730118085116079070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkcb7,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 0531b433-940f-4d3d-aae4-9fe5a1b96815,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c,PodSandboxId:84b37f3c41fb7f9fec904ed880d45c56bd5e87aa6cd2924d5f9a0a0994b93a6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1730118085089335766,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b672315-a64e-4222-b07a-3a76050a3b67,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90,PodSandboxId:848af5b289652b60967283f36cc1ede29e347dda0af6d89bc84d91ae7cb4f014,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1730118084906167167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6krbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eab77549-1b29-4a66-b284-d63774357f88,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05,PodSandboxId:959c35f94c2d476ff6502e969d0d43ae9a7c12aef7d9a0a37c15aa00c12219c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1730118072857229176,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43cd23c8951cc85d7333a08820d77e65,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e,PodSandboxId:9472931a10611f84b527697e528fa6a9610c298a9506b5a6d73bd9b67f5a6216,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1730118072836335606,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ed87fb6b1af6953f1209b69f39ac00,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd,PodSandboxId:df35e1501e17f8b045bf2e7151c19852afbd31801f8209648635029ca99f9958,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1730118072816558433,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38,PodSandboxId:99b42080bcf0cd2d9a440698337e234b4b41a7bb1620642ada71a9a2602e33a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1730118072722452677,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160601a4b03eef26d86ee8a233bf746d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487,PodSandboxId:3760c60af964c998070deeb262c8ed9c28d88223e7b274e777709b87ce462898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1730117784307083144,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-349222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3417ce4ccdb3ce86a35beadad64e12,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4a9e802-c3a7-4f1f-bf25-83b40cfb9444 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3a7aceb893fee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   9dfe25fb53ae1       coredns-7c65d6cfc9-nkcb7
	8f42d75c0ae9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   84b37f3c41fb7       storage-provisioner
	f47658c9ee366       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   bb0049c7ac79f       coredns-7c65d6cfc9-rxfxk
	c06cad6ecf391       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   17 minutes ago      Running             kube-proxy                0                   848af5b289652       kube-proxy-6krbc
	5c2ab4a694be8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   959c35f94c2d4       etcd-default-k8s-diff-port-349222
	6c7c91e017ca1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   17 minutes ago      Running             kube-controller-manager   2                   9472931a10611       kube-controller-manager-default-k8s-diff-port-349222
	a0e1fe9e1548a       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   17 minutes ago      Running             kube-apiserver            2                   df35e1501e17f       kube-apiserver-default-k8s-diff-port-349222
	871982dcccfa5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   17 minutes ago      Running             kube-scheduler            2                   99b42080bcf0c       kube-scheduler-default-k8s-diff-port-349222
	558c1f7b76098       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   22 minutes ago      Exited              kube-apiserver            1                   3760c60af964c       kube-apiserver-default-k8s-diff-port-349222
	
	
	==> coredns [3a7aceb893feef6a9a4ef82208631804f2c088215d949b9b1aa509810bf6204f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f47658c9ee36627d8102825b6cf03ee7dc52c77ac064d635171fe0d63c34be58] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-349222
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-349222
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
	                    minikube.k8s.io/name=default-k8s-diff-port-349222
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_28T12_21_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 28 Oct 2024 12:21:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-349222
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 28 Oct 2024 12:38:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 28 Oct 2024 12:36:48 +0000   Mon, 28 Oct 2024 12:21:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 28 Oct 2024 12:36:48 +0000   Mon, 28 Oct 2024 12:21:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 28 Oct 2024 12:36:48 +0000   Mon, 28 Oct 2024 12:21:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 28 Oct 2024 12:36:48 +0000   Mon, 28 Oct 2024 12:21:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.75
	  Hostname:    default-k8s-diff-port-349222
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 97b39fb3738145a4a89a71ccc8a6b7ec
	  System UUID:                97b39fb3-7381-45a4-a89a-71ccc8a6b7ec
	  Boot ID:                    3e81d451-65bb-48aa-924b-f60b7c7ff158
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-nkcb7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-rxfxk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-349222                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-349222             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-349222    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-6krbc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-349222             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-4xgsk                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node default-k8s-diff-port-349222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node default-k8s-diff-port-349222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node default-k8s-diff-port-349222 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m   node-controller  Node default-k8s-diff-port-349222 event: Registered Node default-k8s-diff-port-349222 in Controller
	
	
	==> dmesg <==
	[  +0.054637] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046276] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct28 12:16] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.933083] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.644120] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.607521] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.068120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058730] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.219937] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.134028] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.345433] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.621163] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +0.076382] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.994797] systemd-fstab-generator[915]: Ignoring "noauto" option for root device
	[  +5.674875] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.847208] kauditd_printk_skb: 85 callbacks suppressed
	[Oct28 12:21] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.916137] systemd-fstab-generator[2599]: Ignoring "noauto" option for root device
	[  +4.455498] kauditd_printk_skb: 58 callbacks suppressed
	[  +2.098111] systemd-fstab-generator[2926]: Ignoring "noauto" option for root device
	[  +4.943388] systemd-fstab-generator[3037]: Ignoring "noauto" option for root device
	[  +0.124254] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.298044] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [5c2ab4a694be89ca408bf1d43aca94766b3056533fde365ec9579c10664f9d05] <==
	{"level":"info","ts":"2024-10-28T12:31:14.258100Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":685}
	{"level":"info","ts":"2024-10-28T12:31:14.267189Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":685,"took":"8.668438ms","hash":3538592047,"current-db-size-bytes":2191360,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2191360,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-10-28T12:31:14.267307Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3538592047,"revision":685,"compact-revision":-1}
	{"level":"warn","ts":"2024-10-28T12:35:52.267490Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.637059ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:35:52.267640Z","caller":"traceutil/trace.go:171","msg":"trace[124316193] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1153; }","duration":"260.904124ms","start":"2024-10-28T12:35:52.006712Z","end":"2024-10-28T12:35:52.267616Z","steps":["trace[124316193] 'range keys from in-memory index tree'  (duration: 260.505181ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:35:52.654959Z","caller":"traceutil/trace.go:171","msg":"trace[1382150155] transaction","detail":"{read_only:false; response_revision:1154; number_of_response:1; }","duration":"368.021157ms","start":"2024-10-28T12:35:52.286910Z","end":"2024-10-28T12:35:52.654931Z","steps":["trace[1382150155] 'process raft request'  (duration: 367.90589ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:35:52.655864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-28T12:35:52.286894Z","time spent":"368.191442ms","remote":"127.0.0.1:35368","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1152 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-28T12:36:14.266586Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-10-28T12:36:14.271335Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":927,"took":"4.426836ms","hash":2579143274,"current-db-size-bytes":2191360,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1548288,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-10-28T12:36:14.271401Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2579143274,"revision":927,"compact-revision":685}
	{"level":"info","ts":"2024-10-28T12:36:35.172467Z","caller":"traceutil/trace.go:171","msg":"trace[198825935] transaction","detail":"{read_only:false; response_revision:1189; number_of_response:1; }","duration":"261.270688ms","start":"2024-10-28T12:36:34.911170Z","end":"2024-10-28T12:36:35.172441Z","steps":["trace[198825935] 'process raft request'  (duration: 261.134885ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:36:35.174656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.895195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:36:35.174782Z","caller":"traceutil/trace.go:171","msg":"trace[248026215] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1189; }","duration":"171.037536ms","start":"2024-10-28T12:36:35.003736Z","end":"2024-10-28T12:36:35.174773Z","steps":["trace[248026215] 'agreement among raft nodes before linearized reading'  (duration: 170.859929ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:36:35.174546Z","caller":"traceutil/trace.go:171","msg":"trace[258839859] linearizableReadLoop","detail":"{readStateIndex:1390; appliedIndex:1390; }","duration":"169.636871ms","start":"2024-10-28T12:36:35.003743Z","end":"2024-10-28T12:36:35.173379Z","steps":["trace[258839859] 'read index received'  (duration: 169.631155ms)","trace[258839859] 'applied index is now lower than readState.Index'  (duration: 4.576µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:36:35.175875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.992061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-28T12:36:35.176136Z","caller":"traceutil/trace.go:171","msg":"trace[1017087397] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1189; }","duration":"125.257508ms","start":"2024-10-28T12:36:35.050860Z","end":"2024-10-28T12:36:35.176117Z","steps":["trace[1017087397] 'agreement among raft nodes before linearized reading'  (duration: 124.735669ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:37:43.151341Z","caller":"traceutil/trace.go:171","msg":"trace[1567819037] linearizableReadLoop","detail":"{readStateIndex:1458; appliedIndex:1457; }","duration":"150.463664ms","start":"2024-10-28T12:37:43.000752Z","end":"2024-10-28T12:37:43.151215Z","steps":["trace[1567819037] 'read index received'  (duration: 66.656492ms)","trace[1567819037] 'applied index is now lower than readState.Index'  (duration: 83.806089ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:37:43.151640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.765953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-28T12:37:43.151925Z","caller":"traceutil/trace.go:171","msg":"trace[1227437635] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1243; }","duration":"151.140112ms","start":"2024-10-28T12:37:43.000747Z","end":"2024-10-28T12:37:43.151887Z","steps":["trace[1227437635] 'agreement among raft nodes before linearized reading'  (duration: 150.734178ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-28T12:38:28.049475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.407892ms","expected-duration":"100ms","prefix":"","request":"header:<ID:641642907173615863 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.75\" mod_revision:1274 > success:<request_put:<key:\"/registry/masterleases/192.168.50.75\" value_size:66 lease:641642907173615861 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.75\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-28T12:38:28.049713Z","caller":"traceutil/trace.go:171","msg":"trace[1920841630] linearizableReadLoop","detail":"{readStateIndex:1506; appliedIndex:1505; }","duration":"135.761183ms","start":"2024-10-28T12:38:27.913938Z","end":"2024-10-28T12:38:28.049699Z","steps":["trace[1920841630] 'read index received'  (duration: 7.515941ms)","trace[1920841630] 'applied index is now lower than readState.Index'  (duration: 128.243991ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-28T12:38:28.049807Z","caller":"traceutil/trace.go:171","msg":"trace[1422413961] transaction","detail":"{read_only:false; response_revision:1282; number_of_response:1; }","duration":"188.341663ms","start":"2024-10-28T12:38:27.861443Z","end":"2024-10-28T12:38:28.049785Z","steps":["trace[1422413961] 'process raft request'  (duration: 60.060121ms)","trace[1422413961] 'compare'  (duration: 127.239893ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-28T12:38:28.049955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.009719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-10-28T12:38:28.050011Z","caller":"traceutil/trace.go:171","msg":"trace[433931023] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1282; }","duration":"136.068133ms","start":"2024-10-28T12:38:27.913933Z","end":"2024-10-28T12:38:28.050001Z","steps":["trace[433931023] 'agreement among raft nodes before linearized reading'  (duration: 135.878294ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-28T12:38:28.257945Z","caller":"traceutil/trace.go:171","msg":"trace[1379979197] transaction","detail":"{read_only:false; response_revision:1283; number_of_response:1; }","duration":"201.841132ms","start":"2024-10-28T12:38:28.056087Z","end":"2024-10-28T12:38:28.257928Z","steps":["trace[1379979197] 'process raft request'  (duration: 141.341791ms)","trace[1379979197] 'compare'  (duration: 60.291192ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:38:49 up 22 min,  0 users,  load average: 0.09, 0.15, 0.12
	Linux default-k8s-diff-port-349222 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [558c1f7b76098eb3a02c6443ef714d0502a54e1a2b4d6cbd7c3f4c27cd4a3487] <==
	W1028 12:21:04.690925       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.691177       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.736979       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.749611       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.769659       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.825014       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.832723       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.886964       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.949711       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:04.977736       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.044495       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.060208       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.146554       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.286592       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.352424       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.569913       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:05.708692       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:08.465637       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:08.748326       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.042483       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.138741       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.431553       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.477715       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.506940       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1028 12:21:09.609540       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a0e1fe9e1548a006faa090e74eefb7853d8cf98dcacfc8cdf1ac20ff5bc126bd] <==
	I1028 12:34:17.048854       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:34:17.048924       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:36:16.045309       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:36:16.045724       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1028 12:36:17.048349       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:36:17.048414       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1028 12:36:17.048662       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:36:17.048875       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:36:17.049604       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1028 12:36:17.050748       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:37:17.050530       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:37:17.050672       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1028 12:37:17.051827       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1028 12:37:17.052036       1 handler_proxy.go:99] no RequestInfo found in the context
	E1028 12:37:17.052091       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1028 12:37:17.053389       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6c7c91e017ca1cc1abb23095b44ab6dc8e81f352f86a4026ae02897e6154155e] <==
	E1028 12:33:23.161285       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:33:23.685946       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:33:53.168138       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:33:53.698966       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:34:23.175603       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:34:23.708839       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:34:53.183724       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:34:53.717733       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:35:23.191189       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:35:23.727345       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:35:53.199553       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:35:53.749967       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:36:23.206799       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:36:23.758859       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:36:48.472351       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-349222"
	E1028 12:36:53.218034       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:36:53.769149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1028 12:37:23.225084       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:37:23.779507       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:37:48.624787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="460.43µs"
	E1028 12:37:53.231584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:37:53.792555       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1028 12:38:03.617759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="96.712µs"
	E1028 12:38:23.238462       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1028 12:38:23.803714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c06cad6ecf39169ea2349f0bcbf76e82623487a4581111d5e535dd4bbdb25c90] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1028 12:21:25.605114       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1028 12:21:25.614635       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.75"]
	E1028 12:21:25.614731       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1028 12:21:25.651647       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1028 12:21:25.651699       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1028 12:21:25.651732       1 server_linux.go:169] "Using iptables Proxier"
	I1028 12:21:25.654413       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1028 12:21:25.654732       1 server.go:483] "Version info" version="v1.31.2"
	I1028 12:21:25.654759       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1028 12:21:25.656192       1 config.go:199] "Starting service config controller"
	I1028 12:21:25.656227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1028 12:21:25.656319       1 config.go:105] "Starting endpoint slice config controller"
	I1028 12:21:25.656341       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1028 12:21:25.658899       1 config.go:328] "Starting node config controller"
	I1028 12:21:25.658975       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1028 12:21:25.757007       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1028 12:21:25.757332       1 shared_informer.go:320] Caches are synced for service config
	I1028 12:21:25.759027       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [871982dcccfa5cec8ebf1d39a32e0781bfccbcbcf866a64834c402d7a3c9bf38] <==
	W1028 12:21:16.966122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1028 12:21:16.966304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.016952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1028 12:21:17.017006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.019338       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1028 12:21:17.019386       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1028 12:21:17.024030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1028 12:21:17.024164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.134329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1028 12:21:17.134463       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.140161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1028 12:21:17.140412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.154720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1028 12:21:17.154775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.198423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1028 12:21:17.198492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.206440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1028 12:21:17.206495       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.246340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1028 12:21:17.246456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.368346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1028 12:21:17.368395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1028 12:21:17.377145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1028 12:21:17.377215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1028 12:21:20.266218       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 28 12:37:38 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:37:38.975453    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119058974829060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:48 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:37:48.601615    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:37:48 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:37:48.977871    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119068977383051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:48 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:37:48.977914    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119068977383051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:58 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:37:58.979858    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119078979420000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:37:58 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:37:58.980469    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119078979420000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:03 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:03.601487    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:38:08 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:08.983223    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119088982727158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:08 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:08.983314    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119088982727158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:14 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:14.602145    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:38:18 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:18.637771    2933 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 28 12:38:18 default-k8s-diff-port-349222 kubelet[2933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 28 12:38:18 default-k8s-diff-port-349222 kubelet[2933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 28 12:38:18 default-k8s-diff-port-349222 kubelet[2933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 28 12:38:18 default-k8s-diff-port-349222 kubelet[2933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 28 12:38:18 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:18.985995    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119098985556948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:18 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:18.986053    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119098985556948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:27 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:27.601750    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:38:28 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:28.988433    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119108987803362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:28 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:28.988497    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119108987803362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:38 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:38.991354    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119118990782841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:38 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:38.991429    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119118990782841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:42 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:42.602639    2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4xgsk" podUID="d9428c22-0c65-4809-a647-8a4c3737f67d"
	Oct 28 12:38:48 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:48.993708    2933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119128993218192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 28 12:38:48 default-k8s-diff-port-349222 kubelet[2933]: E1028 12:38:48.993768    2933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730119128993218192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8f42d75c0ae9ce08ca47da9c16e732f8ba971e6941e3e3ef2c1f8cbc481f663c] <==
	I1028 12:21:25.319679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1028 12:21:25.464715       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1028 12:21:25.464760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1028 12:21:25.498933       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1028 12:21:25.499105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-349222_a78ad73a-4d1f-4a3a-b56d-98d17bafc5cc!
	I1028 12:21:25.507084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7e1418f-921a-4177-89d4-79db96a98cb8", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-349222_a78ad73a-4d1f-4a3a-b56d-98d17bafc5cc became leader
	I1028 12:21:25.599565       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-349222_a78ad73a-4d1f-4a3a-b56d-98d17bafc5cc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-349222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4xgsk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-349222 describe pod metrics-server-6867b74b74-4xgsk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-349222 describe pod metrics-server-6867b74b74-4xgsk: exit status 1 (100.853091ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4xgsk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-349222 describe pod metrics-server-6867b74b74-4xgsk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (493.12s)
E1028 12:40:09.886485  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (130.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
E1028 12:35:09.886840  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
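The repeated "connection refused" warnings above all target https://192.168.61.119:8443, meaning the old-k8s-version apiserver never came back after the stop/restart; that is what drives the 9m0s timeout reported below. Assuming the VM itself is still running, a manual probe of the endpoint (illustrative only, not executed as part of this run) would be:

	curl -k https://192.168.61.119:8443/healthz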
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 2 (246.581304ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-089993" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-089993 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-089993 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.065µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-089993 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
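The image assertion above could not be evaluated because the apiserver stayed down and the describe call timed out. Once the cluster responds again, the scraper image can be read directly; the command below is a hypothetical follow-up, not something this run executed:

	kubectl --context old-k8s-version-089993 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

The expected value contains registry.k8s.io/echoserver:1.4, the override passed via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (see the Audit table below).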
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 2 (234.716545ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-089993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-089993 logs -n 25: (1.550075s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-601400                              | cert-expiration-601400       | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:07 UTC |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:07 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-871884             | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-337849                           | kubernetes-upgrade-337849    | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-219559 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:08 UTC |
	|         | disable-driver-mounts-219559                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:08 UTC | 28 Oct 24 12:10 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-709250            | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC | 28 Oct 24 12:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-089993        | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-871884                  | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-349222  | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC |                     |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p no-preload-871884                                   | no-preload-871884            | jenkins | v1.34.0 | 28 Oct 24 12:10 UTC | 28 Oct 24 12:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-709250                 | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-709250                                  | embed-certs-709250           | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-089993             | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC | 28 Oct 24 12:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-089993                              | old-k8s-version-089993       | jenkins | v1.34.0 | 28 Oct 24 12:11 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-349222       | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-349222 | jenkins | v1.34.0 | 28 Oct 24 12:13 UTC | 28 Oct 24 12:21 UTC |
	|         | default-k8s-diff-port-349222                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 12:13:02
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 12:13:02.452508  186547 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:13:02.452621  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452630  186547 out.go:358] Setting ErrFile to fd 2...
	I1028 12:13:02.452635  186547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:13:02.452828  186547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:13:02.453378  186547 out.go:352] Setting JSON to false
	I1028 12:13:02.454320  186547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6925,"bootTime":1730110657,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:13:02.454420  186547 start.go:139] virtualization: kvm guest
	I1028 12:13:02.456754  186547 out.go:177] * [default-k8s-diff-port-349222] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:13:02.458343  186547 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:13:02.458413  186547 notify.go:220] Checking for updates...
	I1028 12:13:02.460946  186547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:13:02.462089  186547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:13:02.463460  186547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:13:02.464649  186547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:13:02.466107  186547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:13:02.468142  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:13:02.468518  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.468587  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.483793  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I1028 12:13:02.484273  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.484861  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.484884  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.485260  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.485471  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.485712  186547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:13:02.485997  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:13:02.486030  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:13:02.501110  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I1028 12:13:02.501722  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:13:02.502335  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:13:02.502362  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:13:02.502682  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:13:02.502878  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:13:02.539766  186547 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 12:13:02.541024  186547 start.go:297] selected driver: kvm2
	I1028 12:13:02.541038  186547 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.541168  186547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:13:02.541929  186547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.542014  186547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 12:13:02.557443  186547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 12:13:02.557868  186547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:13:02.557902  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:13:02.557947  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:13:02.557987  186547 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:13:02.558098  186547 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 12:13:02.560651  186547 out.go:177] * Starting "default-k8s-diff-port-349222" primary control-plane node in "default-k8s-diff-port-349222" cluster
	I1028 12:13:02.693744  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:02.561767  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:13:02.561800  186547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 12:13:02.561810  186547 cache.go:56] Caching tarball of preloaded images
	I1028 12:13:02.561877  186547 preload.go:172] Found /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1028 12:13:02.561887  186547 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1028 12:13:02.561973  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:13:02.562165  186547 start.go:360] acquireMachinesLock for default-k8s-diff-port-349222: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:13:08.773770  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:11.845825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:17.925957  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:20.997733  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:27.077858  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:30.149737  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:36.229851  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:39.301764  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:45.381781  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:48.453767  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:54.533793  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:13:57.605754  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:03.685848  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:06.757874  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:12.837829  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:15.909778  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:21.989850  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:25.061812  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:31.141825  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:34.213757  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:40.293844  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:43.365865  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:49.445872  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:52.517750  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:14:58.597834  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:01.669837  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:07.749853  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:10.821838  185546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.156:22: connect: no route to host
	I1028 12:15:13.826298  185942 start.go:364] duration metric: took 3m37.788021766s to acquireMachinesLock for "embed-certs-709250"
	I1028 12:15:13.826369  185942 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:13.826382  185942 fix.go:54] fixHost starting: 
	I1028 12:15:13.827047  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:13.827113  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:13.842889  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I1028 12:15:13.843403  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:13.843915  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:15:13.843938  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:13.844374  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:13.844568  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:13.844733  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:15:13.846440  185942 fix.go:112] recreateIfNeeded on embed-certs-709250: state=Stopped err=<nil>
	I1028 12:15:13.846464  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	W1028 12:15:13.846629  185942 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:13.848878  185942 out.go:177] * Restarting existing kvm2 VM for "embed-certs-709250" ...
	I1028 12:15:13.850607  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Start
	I1028 12:15:13.850800  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring networks are active...
	I1028 12:15:13.851930  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network default is active
	I1028 12:15:13.852331  185942 main.go:141] libmachine: (embed-certs-709250) Ensuring network mk-embed-certs-709250 is active
	I1028 12:15:13.852652  185942 main.go:141] libmachine: (embed-certs-709250) Getting domain xml...
	I1028 12:15:13.853394  185942 main.go:141] libmachine: (embed-certs-709250) Creating domain...
	I1028 12:15:15.098667  185942 main.go:141] libmachine: (embed-certs-709250) Waiting to get IP...
	I1028 12:15:15.099525  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.099919  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.099951  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.099877  187018 retry.go:31] will retry after 285.25732ms: waiting for machine to come up
	I1028 12:15:15.386531  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.386992  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.387023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.386921  187018 retry.go:31] will retry after 327.08041ms: waiting for machine to come up
	I1028 12:15:15.715435  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:15.715900  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:15.715928  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:15.715846  187018 retry.go:31] will retry after 443.083162ms: waiting for machine to come up
	I1028 12:15:13.823652  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:13.823724  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824056  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:15:13.824085  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:15:13.824284  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:15:13.826158  185546 machine.go:96] duration metric: took 4m37.413470632s to provisionDockerMachine
	I1028 12:15:13.826202  185546 fix.go:56] duration metric: took 4m37.436313043s for fixHost
	I1028 12:15:13.826208  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 4m37.436350273s
	W1028 12:15:13.826226  185546 start.go:714] error starting host: provision: host is not running
	W1028 12:15:13.826336  185546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1028 12:15:13.826346  185546 start.go:729] Will try again in 5 seconds ...
	I1028 12:15:16.160595  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.161024  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.161045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.161003  187018 retry.go:31] will retry after 599.535995ms: waiting for machine to come up
	I1028 12:15:16.761771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:16.762167  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:16.762213  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:16.762114  187018 retry.go:31] will retry after 527.275961ms: waiting for machine to come up
	I1028 12:15:17.290788  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:17.291124  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:17.291145  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:17.291098  187018 retry.go:31] will retry after 858.175967ms: waiting for machine to come up
	I1028 12:15:18.150644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.151045  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.151071  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.150996  187018 retry.go:31] will retry after 727.962346ms: waiting for machine to come up
	I1028 12:15:18.880545  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:18.880990  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:18.881020  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:18.880942  187018 retry.go:31] will retry after 1.184956373s: waiting for machine to come up
	I1028 12:15:20.067178  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:20.067603  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:20.067635  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:20.067553  187018 retry.go:31] will retry after 1.635056202s: waiting for machine to come up
	I1028 12:15:18.827987  185546 start.go:360] acquireMachinesLock for no-preload-871884: {Name:mk3750a0bdd2e97c7159620fd9d743a5396606c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1028 12:15:21.703941  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:21.704338  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:21.704365  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:21.704302  187018 retry.go:31] will retry after 1.865473383s: waiting for machine to come up
	I1028 12:15:23.572362  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:23.572816  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:23.572843  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:23.572773  187018 retry.go:31] will retry after 2.604970031s: waiting for machine to come up
	I1028 12:15:26.181289  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:26.181849  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:26.181880  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:26.181788  187018 retry.go:31] will retry after 2.866004055s: waiting for machine to come up
	I1028 12:15:29.049604  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:29.050029  185942 main.go:141] libmachine: (embed-certs-709250) DBG | unable to find current IP address of domain embed-certs-709250 in network mk-embed-certs-709250
	I1028 12:15:29.050068  185942 main.go:141] libmachine: (embed-certs-709250) DBG | I1028 12:15:29.049970  187018 retry.go:31] will retry after 3.046879869s: waiting for machine to come up
	I1028 12:15:33.350844  186170 start.go:364] duration metric: took 3m34.924904114s to acquireMachinesLock for "old-k8s-version-089993"
	I1028 12:15:33.350912  186170 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:33.350923  186170 fix.go:54] fixHost starting: 
	I1028 12:15:33.351392  186170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:33.351440  186170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:33.368339  186170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1028 12:15:33.368805  186170 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:33.369418  186170 main.go:141] libmachine: Using API Version  1
	I1028 12:15:33.369439  186170 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:33.369784  186170 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:33.369969  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:33.370125  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetState
	I1028 12:15:33.371873  186170 fix.go:112] recreateIfNeeded on old-k8s-version-089993: state=Stopped err=<nil>
	I1028 12:15:33.371908  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	W1028 12:15:33.372086  186170 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:33.374289  186170 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-089993" ...
	I1028 12:15:32.100252  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.100812  185942 main.go:141] libmachine: (embed-certs-709250) Found IP for machine: 192.168.39.211
	I1028 12:15:32.100831  185942 main.go:141] libmachine: (embed-certs-709250) Reserving static IP address...
	I1028 12:15:32.100842  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has current primary IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.101552  185942 main.go:141] libmachine: (embed-certs-709250) Reserved static IP address: 192.168.39.211
	I1028 12:15:32.101568  185942 main.go:141] libmachine: (embed-certs-709250) Waiting for SSH to be available...
	I1028 12:15:32.101602  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.101629  185942 main.go:141] libmachine: (embed-certs-709250) DBG | skip adding static IP to network mk-embed-certs-709250 - found existing host DHCP lease matching {name: "embed-certs-709250", mac: "52:54:00:39:3b:0d", ip: "192.168.39.211"}
	I1028 12:15:32.101644  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Getting to WaitForSSH function...
	I1028 12:15:32.104041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.104356  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.104459  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH client type: external
	I1028 12:15:32.104488  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa (-rw-------)
	I1028 12:15:32.104519  185942 main.go:141] libmachine: (embed-certs-709250) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:32.104530  185942 main.go:141] libmachine: (embed-certs-709250) DBG | About to run SSH command:
	I1028 12:15:32.104538  185942 main.go:141] libmachine: (embed-certs-709250) DBG | exit 0
	I1028 12:15:32.233966  185942 main.go:141] libmachine: (embed-certs-709250) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:32.234363  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetConfigRaw
	I1028 12:15:32.235010  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.237443  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.237755  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.237783  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.238040  185942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/config.json ...
	I1028 12:15:32.238272  185942 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:32.238291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:32.238541  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.240765  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241165  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.241212  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.241333  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.241520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241704  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.241836  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.241989  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.242234  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.242247  185942 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:32.358412  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:32.358443  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.358773  185942 buildroot.go:166] provisioning hostname "embed-certs-709250"
	I1028 12:15:32.358810  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.359027  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.361776  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362122  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.362161  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.362262  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.362429  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362579  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.362709  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.362867  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.363084  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.363098  185942 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-709250 && echo "embed-certs-709250" | sudo tee /etc/hostname
	I1028 12:15:32.492437  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-709250
	
	I1028 12:15:32.492466  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.495108  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495394  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.495438  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.495586  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.495771  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.495927  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.496054  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.496215  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.496399  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.496416  185942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-709250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-709250/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-709250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:32.619038  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:32.619074  185942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:32.619113  185942 buildroot.go:174] setting up certificates
	I1028 12:15:32.619125  185942 provision.go:84] configureAuth start
	I1028 12:15:32.619137  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetMachineName
	I1028 12:15:32.619451  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:32.622055  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622448  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.622479  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.622593  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.624610  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625037  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.625066  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.625086  185942 provision.go:143] copyHostCerts
	I1028 12:15:32.625174  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:32.625190  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:32.625259  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:32.625396  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:32.625407  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:32.625444  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:32.625519  185942 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:32.625541  185942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:32.625575  185942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:32.625645  185942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.embed-certs-709250 san=[127.0.0.1 192.168.39.211 embed-certs-709250 localhost minikube]
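The provision.go step above generates a server certificate signed by the minikube CA, with the listed IPs and hostnames as subject alternative names. Below is a condensed crypto/x509 sketch of that idea, not minikube's actual code; the key size, serial numbers, validity periods, and the throwaway CA in main are illustrative assumptions.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert builds a server certificate signed by caCert/caKey with the
// SANs listed in the log line above.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
		DNSNames:     []string{"embed-certs-709250", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway self-signed CA for the sketch; the real run reuses ca.pem/ca-key.pem
	// from the .minikube directory shown in the log.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	if _, _, err := newServerCert(caCert, caKey, "jenkins.embed-certs-709250"); err != nil {
		panic(err)
	}
	fmt.Println("server certificate generated")
}
```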
	I1028 12:15:32.684483  185942 provision.go:177] copyRemoteCerts
	I1028 12:15:32.684547  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:32.684576  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.686926  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687244  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.687284  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.687427  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.687617  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.687744  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.687890  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:32.776282  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:32.802180  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1028 12:15:32.829609  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:32.854274  185942 provision.go:87] duration metric: took 235.133526ms to configureAuth
	I1028 12:15:32.854305  185942 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:32.854474  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:15:32.854547  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:32.857363  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.857736  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:32.857771  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:32.858038  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:32.858251  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858442  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:32.858652  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:32.858809  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:32.858979  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:32.858996  185942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:33.101841  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:33.101870  185942 machine.go:96] duration metric: took 863.584969ms to provisionDockerMachine
	I1028 12:15:33.101883  185942 start.go:293] postStartSetup for "embed-certs-709250" (driver="kvm2")
	I1028 12:15:33.101896  185942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:33.101911  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.102249  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:33.102285  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.105023  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105327  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.105357  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.105493  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.105710  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.105881  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.106032  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.193225  185942 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:33.197548  185942 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:33.197570  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:33.197637  185942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:33.197739  185942 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:33.197861  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:33.207962  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:33.231808  185942 start.go:296] duration metric: took 129.908529ms for postStartSetup
	I1028 12:15:33.231853  185942 fix.go:56] duration metric: took 19.405472942s for fixHost
	I1028 12:15:33.231875  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.234609  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.234943  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.234979  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.235167  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.235370  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235520  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.235642  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.235806  185942 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:33.236026  185942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1028 12:15:33.236041  185942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:33.350639  185942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117733.322211717
	
	I1028 12:15:33.350663  185942 fix.go:216] guest clock: 1730117733.322211717
	I1028 12:15:33.350673  185942 fix.go:229] Guest: 2024-10-28 12:15:33.322211717 +0000 UTC Remote: 2024-10-28 12:15:33.231858201 +0000 UTC m=+237.345598419 (delta=90.353516ms)
	I1028 12:15:33.350707  185942 fix.go:200] guest clock delta is within tolerance: 90.353516ms
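fix.go above compares the guest clock (read via `date +%s.%N` over SSH) with the host-side timestamp and skips a resync when the delta is within tolerance. A minimal sketch of that comparison; the one-second tolerance and the helper name are assumptions, only the timestamps come from the log.

```go
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host-observed time that no clock resync is needed.
func withinTolerance(guest, remote time.Time, max time.Duration) bool {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta <= max
}

func main() {
	// Values taken from the log above: guest 1730117733.322211717, delta ~90.35ms.
	guest := time.Unix(0, 1730117733322211717)
	remote := guest.Add(-90353516 * time.Nanosecond)
	fmt.Println("within tolerance:", withinTolerance(guest, remote, time.Second)) // true
}
```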
	I1028 12:15:33.350714  185942 start.go:83] releasing machines lock for "embed-certs-709250", held for 19.524379046s
	I1028 12:15:33.350737  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.350974  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:33.353647  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354012  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.354041  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.354244  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354690  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354873  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:15:33.354973  185942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:33.355017  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.355090  185942 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:33.355116  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:15:33.357679  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358050  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358074  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358242  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358389  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.358542  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.358584  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:33.358616  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:33.358681  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.358721  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:15:33.358892  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:15:33.359048  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:15:33.359197  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:15:33.443468  185942 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:33.498501  185942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:33.642221  185942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:33.649269  185942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:33.649336  185942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:33.665990  185942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:33.666023  185942 start.go:495] detecting cgroup driver to use...
	I1028 12:15:33.666103  185942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:33.683188  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:33.699441  185942 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:33.699506  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:33.714192  185942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:33.728325  185942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:33.850801  185942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:34.028929  185942 docker.go:233] disabling docker service ...
	I1028 12:15:34.028991  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:34.045600  185942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:34.059450  185942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:34.182310  185942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:34.305346  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:34.322354  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:34.342738  185942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:15:34.342804  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.354622  185942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:34.354687  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.365663  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.376503  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.388360  185942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:34.399960  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.419087  185942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.439700  185942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:34.451425  185942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:34.461657  185942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:34.461710  185942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:34.476292  185942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:34.487186  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:34.614984  185942 ssh_runner.go:195] Run: sudo systemctl restart crio
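The run of sed/systemctl commands above rewrites /etc/crio/crio.conf.d/02-crio.conf and restarts CRI-O. Below is a compressed sketch of that sequence behind a tiny runner interface; the Runner type is an assumption standing in for minikube's ssh_runner, and only a subset of the edits from the log is reproduced.

```go
package main

import (
	"fmt"
	"os/exec"
)

// Runner is a stand-in for minikube's ssh_runner: it executes a shell command
// on the guest.
type Runner interface {
	Run(cmd string) error
}

type localRunner struct{}

func (localRunner) Run(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// configureCRIO mirrors the sequence in the log: point crictl at the crio
// socket, set the pause image and cgroup driver, then reload and restart crio.
func configureCRIO(r Runner, pauseImage, cgroupDriver string) error {
	steps := []string{
		`printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml`,
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupDriver),
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, step := range steps {
		if err := r.Run(step); err != nil {
			return fmt.Errorf("step %q failed: %w", step, err)
		}
	}
	return nil
}

func main() {
	if err := configureCRIO(localRunner{}, "registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Println("configure failed:", err)
	}
}
```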
	I1028 12:15:34.709983  185942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:34.710061  185942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:34.715204  185942 start.go:563] Will wait 60s for crictl version
	I1028 12:15:34.715268  185942 ssh_runner.go:195] Run: which crictl
	I1028 12:15:34.719459  185942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:34.760330  185942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:34.760407  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.788635  185942 ssh_runner.go:195] Run: crio --version
	I1028 12:15:34.820113  185942 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:15:34.821282  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetIP
	I1028 12:15:34.824384  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.824719  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:15:34.824746  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:15:34.825032  185942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:34.829502  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:34.842695  185942 kubeadm.go:883] updating cluster {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:15:34.842845  185942 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:15:34.842897  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:34.881154  185942 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:15:34.881218  185942 ssh_runner.go:195] Run: which lz4
	I1028 12:15:34.885630  185942 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:34.890045  185942 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:34.890075  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:15:33.375597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .Start
	I1028 12:15:33.375787  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring networks are active...
	I1028 12:15:33.376736  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network default is active
	I1028 12:15:33.377208  186170 main.go:141] libmachine: (old-k8s-version-089993) Ensuring network mk-old-k8s-version-089993 is active
	I1028 12:15:33.377706  186170 main.go:141] libmachine: (old-k8s-version-089993) Getting domain xml...
	I1028 12:15:33.378449  186170 main.go:141] libmachine: (old-k8s-version-089993) Creating domain...
	I1028 12:15:34.645925  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting to get IP...
	I1028 12:15:34.646739  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.647234  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.647347  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.647218  187153 retry.go:31] will retry after 292.558863ms: waiting for machine to come up
	I1028 12:15:34.941609  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:34.942074  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:34.942102  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:34.942024  187153 retry.go:31] will retry after 331.872118ms: waiting for machine to come up
	I1028 12:15:35.275748  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.276283  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.276318  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.276244  187153 retry.go:31] will retry after 427.829102ms: waiting for machine to come up
	I1028 12:15:35.705935  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:35.706409  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:35.706438  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:35.706367  187153 retry.go:31] will retry after 371.58196ms: waiting for machine to come up
	I1028 12:15:36.079879  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.080445  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.080469  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.080392  187153 retry.go:31] will retry after 504.323728ms: waiting for machine to come up
	I1028 12:15:36.585967  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:36.586405  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:36.586436  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:36.586346  187153 retry.go:31] will retry after 676.776678ms: waiting for machine to come up
	I1028 12:15:37.265499  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:37.266087  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:37.266114  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:37.266037  187153 retry.go:31] will retry after 1.178891662s: waiting for machine to come up
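The interleaved old-k8s-version-089993 lines show retry.go polling for the VM's DHCP lease with growing delays. A minimal sketch of that wait loop; getIP, the delay formula, and the short demo timeout are assumptions, since the real schedule lives in minikube's retry package.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// getIP is a placeholder for asking libvirt/DHCP for the domain's current address.
func getIP(domain string) (string, error) {
	return "", errNoLease
}

// waitForIP polls until the machine reports an IP or the timeout expires,
// sleeping a growing, jittered interval between attempts.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := getIP(domain); err == nil {
			return ip, nil
		}
		delay := time.Duration(200+rand.Intn(300*attempt)) * time.Millisecond
		fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", attempt, delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("old-k8s-version-089993", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
```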
	I1028 12:15:36.448704  185942 crio.go:462] duration metric: took 1.563096609s to copy over tarball
	I1028 12:15:36.448792  185942 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:15:38.703177  185942 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25435315s)
	I1028 12:15:38.703207  185942 crio.go:469] duration metric: took 2.254465841s to extract the tarball
	I1028 12:15:38.703217  185942 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:38.741005  185942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:38.788350  185942 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:15:38.788376  185942 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:15:38.788383  185942 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1028 12:15:38.788491  185942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-709250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:15:38.788558  185942 ssh_runner.go:195] Run: crio config
	I1028 12:15:38.835642  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:38.835667  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:38.835678  185942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:15:38.835701  185942 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-709250 NodeName:embed-certs-709250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:15:38.835822  185942 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-709250"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:15:38.835879  185942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:15:38.846832  185942 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:15:38.846925  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:15:38.857103  185942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1028 12:15:38.874531  185942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:15:38.892213  185942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1028 12:15:38.910949  185942 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1028 12:15:38.915391  185942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:38.928874  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:39.045969  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:15:39.063425  185942 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250 for IP: 192.168.39.211
	I1028 12:15:39.063449  185942 certs.go:194] generating shared ca certs ...
	I1028 12:15:39.063465  185942 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:15:39.063638  185942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:15:39.063693  185942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:15:39.063709  185942 certs.go:256] generating profile certs ...
	I1028 12:15:39.063810  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/client.key
	I1028 12:15:39.063893  185942 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key.20eef9ce
	I1028 12:15:39.063951  185942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key
	I1028 12:15:39.064107  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:15:39.064153  185942 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:15:39.064167  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:15:39.064202  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:15:39.064239  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:15:39.064272  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:15:39.064335  185942 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:39.064972  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:15:39.103261  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:15:39.145102  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:15:39.175151  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:15:39.205220  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1028 12:15:39.236045  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:15:39.273622  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:15:39.299336  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/embed-certs-709250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:15:39.325277  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:15:39.349878  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:15:39.374466  185942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:15:39.398920  185942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:15:39.416280  185942 ssh_runner.go:195] Run: openssl version
	I1028 12:15:39.422478  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:15:39.434671  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439581  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.439635  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:15:39.445736  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:15:39.457128  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:15:39.468602  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473229  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.473306  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:15:39.479063  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:15:39.490370  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:15:39.501843  185942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506514  185942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.506579  185942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:15:39.512633  185942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
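The openssl/ln pairs above compute each CA's subject-name hash and link it into /etc/ssl/certs as <hash>.0 (e.g. minikubeCA.pem -> b5213941.0), which is how OpenSSL locates trusted CAs. A small sketch of that step, assuming openssl is on PATH and with error handling reduced:

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash returns the output of `openssl x509 -hash -noout -in cert`.
func subjectHash(cert string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// linkIntoTrustDir creates the <hash>.0 symlink OpenSSL uses for CA lookup.
func linkIntoTrustDir(cert, trustDir string) error {
	hash, err := subjectHash(cert)
	if err != nil {
		return err
	}
	link := filepath.Join(trustDir, hash+".0")
	// Equivalent to: sudo ln -fs <cert> /etc/ssl/certs/<hash>.0
	return exec.Command("sudo", "ln", "-fs", cert, link).Run()
}

func main() {
	if err := linkIntoTrustDir("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}
```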
	I1028 12:15:39.524115  185942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:15:39.528804  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:15:39.534982  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:15:39.541214  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:15:39.547734  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:15:39.554143  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:15:39.560719  185942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
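Each `-checkend 86400` call above asks whether a certificate would expire within the next 24 hours. The equivalent check in Go's crypto/x509, as a sketch (the path in main is just one of the files named in the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d of the current time, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	}
}
```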
	I1028 12:15:39.567076  185942 kubeadm.go:392] StartCluster: {Name:embed-certs-709250 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-709250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:15:39.567173  185942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:15:39.567226  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.611567  185942 cri.go:89] found id: ""
	I1028 12:15:39.611644  185942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:15:39.622561  185942 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:15:39.622583  185942 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:15:39.622637  185942 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:15:39.632757  185942 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:15:39.633873  185942 kubeconfig.go:125] found "embed-certs-709250" server: "https://192.168.39.211:8443"
	I1028 12:15:39.635943  185942 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:15:39.646060  185942 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I1028 12:15:39.646104  185942 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:15:39.646119  185942 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:15:39.646177  185942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:15:39.686806  185942 cri.go:89] found id: ""
	I1028 12:15:39.686891  185942 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:15:39.703935  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:15:39.714319  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:15:39.714341  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:15:39.714389  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:15:39.725383  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:15:39.725452  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:15:39.737075  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:15:39.748226  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:15:39.748311  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:15:39.760111  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.770287  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:15:39.770365  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:15:39.780776  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:15:39.790412  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:15:39.790475  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:15:39.800727  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:15:39.811331  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:39.926791  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:38.446927  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:38.447488  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:38.447518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:38.447431  187153 retry.go:31] will retry after 1.170920623s: waiting for machine to come up
	I1028 12:15:39.619731  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:39.620169  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:39.620198  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:39.620119  187153 retry.go:31] will retry after 1.49376255s: waiting for machine to come up
	I1028 12:15:41.115247  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:41.115785  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:41.115815  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:41.115737  187153 retry.go:31] will retry after 2.161966931s: waiting for machine to come up
	I1028 12:15:43.280454  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:43.280989  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:43.281026  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:43.280932  187153 retry.go:31] will retry after 2.179284899s: waiting for machine to come up
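For context: the libmachine retries above are waiting for the old-k8s-version-089993 VM to obtain a DHCP lease on the libvirt network mk-old-k8s-version-089993. If reproducing this wait by hand, the same information can be read from libvirt directly; the network name and MAC address below are the ones from the log, and this check is only illustrative, not something the test itself runs.

# Inspect DHCP leases on the libvirt network the machine is attached to.
virsh net-dhcp-leases mk-old-k8s-version-089993
# Narrow the output to the MAC address the log is waiting on.
virsh net-dhcp-leases mk-old-k8s-version-089993 | grep 52:54:00:50:95:38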
	I1028 12:15:41.043020  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.11617977s)
	I1028 12:15:41.043082  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.246311  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:41.309073  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
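For context: rather than running a full `kubeadm init`, the restart path above re-runs individual init phases against minikube's generated config. A condensed sketch of the same sequence follows; the binary path, config path, and the `sudo env PATH=...` wrapper are taken directly from the commands in the log.

# Re-run the kubeadm init phases in the order the log shows.
CFG=/var/tmp/minikube/kubeadm.yaml
KPATH="/var/lib/minikube/binaries/v1.31.2:$PATH"
sudo env PATH="$KPATH" kubeadm init phase certs all --config "$CFG"          # regenerate cluster certificates
sudo env PATH="$KPATH" kubeadm init phase kubeconfig all --config "$CFG"     # admin/kubelet/controller-manager/scheduler kubeconfigs
sudo env PATH="$KPATH" kubeadm init phase kubelet-start --config "$CFG"      # write kubelet config and restart the kubelet
sudo env PATH="$KPATH" kubeadm init phase control-plane all --config "$CFG"  # static pod manifests for the control plane
sudo env PATH="$KPATH" kubeadm init phase etcd local --config "$CFG"         # static pod manifest for local etcd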
	I1028 12:15:41.392313  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:15:41.392425  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:41.893601  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.393518  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:15:42.444753  185942 api_server.go:72] duration metric: took 1.052438751s to wait for apiserver process to appear ...
	I1028 12:15:42.444794  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:15:42.444821  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.214786  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.214821  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.214837  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.252422  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:15:45.252458  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:15:45.445825  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.451454  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.451549  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:45.945668  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:45.956623  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:45.956667  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.445240  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.450197  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:15:46.450223  185942 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:15:46.945901  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:15:46.950302  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:15:46.956218  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:15:46.956245  185942 api_server.go:131] duration metric: took 4.511443878s to wait for apiserver health ...
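For context: the 403 responses above are expected while RBAC bootstrap roles are still being created (anonymous requests to /healthz are rejected), and the 500 responses list which post-start hooks have not yet finished. A rough shell equivalent of the polling loop follows; the endpoint is the one from the log, and `curl -k` appears only because this sketch skips client certificates.

# Poll the apiserver health endpoint until it returns 200/ok.
APISERVER=https://192.168.39.211:8443
until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$APISERVER/healthz")" = "200" ]; do
  # Show which checks are still failing (the [-] lines seen in the log).
  curl -sk "$APISERVER/healthz?verbose" | grep -v '^\[+\]'
  sleep 0.5
done
echo "apiserver healthy"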
	I1028 12:15:46.956254  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:15:46.956260  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:15:46.958294  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:15:45.462983  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:45.463534  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:45.463560  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:45.463491  187153 retry.go:31] will retry after 2.2623086s: waiting for machine to come up
	I1028 12:15:47.728769  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:47.729277  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | unable to find current IP address of domain old-k8s-version-089993 in network mk-old-k8s-version-089993
	I1028 12:15:47.729332  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | I1028 12:15:47.729241  187153 retry.go:31] will retry after 4.393695309s: waiting for machine to come up
	I1028 12:15:46.959738  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:15:46.970473  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
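For context: the 496-byte /etc/cni/net.d/1-k8s.conflist written above is minikube's bridge CNI configuration; its exact contents are not reproduced in this log. Purely as a hypothetical illustration of the general shape of such a file (the subnet and plugin options below are placeholders, not the values minikube used):

# Illustrative only: a minimal bridge CNI conflist of the kind installed above.
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF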
	I1028 12:15:46.994129  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:15:47.003807  185942 system_pods.go:59] 8 kube-system pods found
	I1028 12:15:47.003843  185942 system_pods.go:61] "coredns-7c65d6cfc9-j66cd" [d53b2839-00f6-4ccc-833d-76424b3efdba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:15:47.003851  185942 system_pods.go:61] "etcd-embed-certs-709250" [24761127-dde4-4f5d-b7cf-a13e37366e0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:15:47.003858  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [17996153-32c3-41e0-be90-fc9e058e0080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:15:47.003864  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [4ce37c00-1015-45f8-b847-1ca92cdf3a31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:15:47.003871  185942 system_pods.go:61] "kube-proxy-dl7xq" [a06ed5ff-b1e9-42c7-ba26-f120bb03ccb6] Running
	I1028 12:15:47.003877  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [c76e654e-a7fc-4891-8e73-bd74f9178c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:15:47.003883  185942 system_pods.go:61] "metrics-server-6867b74b74-k69kz" [568d5308-3f66-459b-b5c8-594d9400b6c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:15:47.003886  185942 system_pods.go:61] "storage-provisioner" [6552cef1-21b6-4306-a3e2-ff16793257dc] Running
	I1028 12:15:47.003893  185942 system_pods.go:74] duration metric: took 9.734271ms to wait for pod list to return data ...
	I1028 12:15:47.003900  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:15:47.008428  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:15:47.008465  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:15:47.008479  185942 node_conditions.go:105] duration metric: took 4.573275ms to run NodePressure ...
	I1028 12:15:47.008504  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:15:47.285509  185942 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291045  185942 kubeadm.go:739] kubelet initialised
	I1028 12:15:47.291069  185942 kubeadm.go:740] duration metric: took 5.521713ms waiting for restarted kubelet to initialise ...
	I1028 12:15:47.291078  185942 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:15:47.299072  185942 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:49.312365  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:50.804953  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:50.804976  185942 pod_ready.go:82] duration metric: took 3.505873868s for pod "coredns-7c65d6cfc9-j66cd" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:50.804986  185942 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
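For context: the pod_ready.go waits above simply watch the Ready condition on each system-critical pod. An equivalent manual check, assuming kubectl is already pointed at this cluster; the pod name is the one from the log.

# Wait for the coredns pod to report Ready, mirroring the pod_ready.go loop.
kubectl -n kube-system wait --for=condition=Ready pod/coredns-7c65d6cfc9-j66cd --timeout=4m

# Or read the condition directly:
kubectl -n kube-system get pod coredns-7c65d6cfc9-j66cd \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'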
	I1028 12:15:52.126559  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126960  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has current primary IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.126988  186170 main.go:141] libmachine: (old-k8s-version-089993) Found IP for machine: 192.168.61.119
	I1028 12:15:52.127021  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserving static IP address...
	I1028 12:15:52.127441  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.127474  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | skip adding static IP to network mk-old-k8s-version-089993 - found existing host DHCP lease matching {name: "old-k8s-version-089993", mac: "52:54:00:50:95:38", ip: "192.168.61.119"}
	I1028 12:15:52.127486  186170 main.go:141] libmachine: (old-k8s-version-089993) Reserved static IP address: 192.168.61.119
	I1028 12:15:52.127498  186170 main.go:141] libmachine: (old-k8s-version-089993) Waiting for SSH to be available...
	I1028 12:15:52.127551  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Getting to WaitForSSH function...
	I1028 12:15:52.129970  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130313  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.130349  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.130518  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH client type: external
	I1028 12:15:52.130540  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa (-rw-------)
	I1028 12:15:52.130565  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:15:52.130578  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | About to run SSH command:
	I1028 12:15:52.130593  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | exit 0
	I1028 12:15:52.253686  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | SSH cmd err, output: <nil>: 
	I1028 12:15:52.254051  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetConfigRaw
	I1028 12:15:52.254719  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.257217  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257692  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.257719  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.257996  186170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/config.json ...
	I1028 12:15:52.258203  186170 machine.go:93] provisionDockerMachine start ...
	I1028 12:15:52.258222  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:52.258456  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.260665  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.260972  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.261012  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.261139  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.261295  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261451  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.261632  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.261786  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.261968  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.261979  186170 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:15:52.362092  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:15:52.362129  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362362  186170 buildroot.go:166] provisioning hostname "old-k8s-version-089993"
	I1028 12:15:52.362386  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.362588  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.365124  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.365489  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.365598  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.365768  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.365924  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.366060  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.366238  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.366424  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.366441  186170 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-089993 && echo "old-k8s-version-089993" | sudo tee /etc/hostname
	I1028 12:15:52.485032  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-089993
	
	I1028 12:15:52.485069  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.487733  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488095  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.488129  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.488270  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.488458  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488597  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.488724  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.488872  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.489063  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.489079  186170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-089993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-089993/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-089993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:15:52.599940  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:15:52.599975  186170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:15:52.600009  186170 buildroot.go:174] setting up certificates
	I1028 12:15:52.600019  186170 provision.go:84] configureAuth start
	I1028 12:15:52.600028  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetMachineName
	I1028 12:15:52.600319  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:52.603047  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603357  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.603385  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.603536  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.605827  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606164  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.606190  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.606334  186170 provision.go:143] copyHostCerts
	I1028 12:15:52.606414  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:15:52.606429  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:15:52.606500  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:15:52.606650  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:15:52.606661  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:15:52.606693  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:15:52.606795  186170 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:15:52.606805  186170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:15:52.606834  186170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:15:52.606904  186170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-089993 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-089993]
	I1028 12:15:52.715475  186170 provision.go:177] copyRemoteCerts
	I1028 12:15:52.715531  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:15:52.715556  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.718456  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718758  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.718801  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.718993  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.719189  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.719339  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.719461  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:52.802994  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:15:52.832311  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1028 12:15:52.864304  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:15:52.892143  186170 provision.go:87] duration metric: took 292.108499ms to configureAuth
	I1028 12:15:52.892178  186170 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:15:52.892401  186170 config.go:182] Loaded profile config "old-k8s-version-089993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1028 12:15:52.892499  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:52.895607  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.895996  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:52.896031  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:52.896198  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:52.896442  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896615  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:52.896796  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:52.897005  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:52.897225  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:52.897249  186170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:15:53.144636  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:15:53.144668  186170 machine.go:96] duration metric: took 886.451205ms to provisionDockerMachine
	I1028 12:15:53.144683  186170 start.go:293] postStartSetup for "old-k8s-version-089993" (driver="kvm2")
	I1028 12:15:53.144701  186170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:15:53.144739  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.145102  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:15:53.145135  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.147486  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147776  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.147805  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.147926  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.148136  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.148297  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.148438  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.228968  186170 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:15:53.233756  186170 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:15:53.233783  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:15:53.233862  186170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:15:53.233981  186170 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:15:53.234114  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:15:53.244314  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:15:53.273027  186170 start.go:296] duration metric: took 128.321696ms for postStartSetup
	I1028 12:15:53.273067  186170 fix.go:56] duration metric: took 19.922145767s for fixHost
	I1028 12:15:53.273087  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.275762  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276036  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.276069  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.276227  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.276431  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276610  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.276759  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.276948  186170 main.go:141] libmachine: Using SSH client type: native
	I1028 12:15:53.277130  186170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1028 12:15:53.277140  186170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:15:53.378422  186547 start.go:364] duration metric: took 2m50.816229865s to acquireMachinesLock for "default-k8s-diff-port-349222"
	I1028 12:15:53.378482  186547 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:15:53.378491  186547 fix.go:54] fixHost starting: 
	I1028 12:15:53.378917  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:15:53.378971  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:15:53.395967  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I1028 12:15:53.396434  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:15:53.396923  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:15:53.396950  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:15:53.397332  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:15:53.397552  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:15:53.397726  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:15:53.399287  186547 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349222: state=Stopped err=<nil>
	I1028 12:15:53.399337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	W1028 12:15:53.399468  186547 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:15:53.401446  186547 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-349222" ...
	I1028 12:15:53.378277  186170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117753.349360033
	
	I1028 12:15:53.378307  186170 fix.go:216] guest clock: 1730117753.349360033
	I1028 12:15:53.378327  186170 fix.go:229] Guest: 2024-10-28 12:15:53.349360033 +0000 UTC Remote: 2024-10-28 12:15:53.273071551 +0000 UTC m=+234.997009775 (delta=76.288482ms)
	I1028 12:15:53.378346  186170 fix.go:200] guest clock delta is within tolerance: 76.288482ms
	I1028 12:15:53.378351  186170 start.go:83] releasing machines lock for "old-k8s-version-089993", held for 20.027466326s
	I1028 12:15:53.378379  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.378640  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:53.381602  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.381951  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.381980  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.382165  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382654  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382864  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .DriverName
	I1028 12:15:53.382949  186170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:15:53.382997  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.383090  186170 ssh_runner.go:195] Run: cat /version.json
	I1028 12:15:53.383109  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHHostname
	I1028 12:15:53.385829  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.385926  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386223  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386272  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386303  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:53.386343  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:53.386522  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386692  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.386704  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHPort
	I1028 12:15:53.386849  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387012  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHKeyPath
	I1028 12:15:53.387009  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.387217  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetSSHUsername
	I1028 12:15:53.387355  186170 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/old-k8s-version-089993/id_rsa Username:docker}
	I1028 12:15:53.462736  186170 ssh_runner.go:195] Run: systemctl --version
	I1028 12:15:53.490076  186170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:15:53.637493  186170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:15:53.643609  186170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:15:53.643668  186170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:15:53.660695  186170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:15:53.660725  186170 start.go:495] detecting cgroup driver to use...
	I1028 12:15:53.660797  186170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:15:53.677283  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:15:53.691838  186170 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:15:53.691914  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:15:53.706354  186170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:15:53.721257  186170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:15:53.843177  186170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:15:54.012260  186170 docker.go:233] disabling docker service ...
	I1028 12:15:54.012369  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:15:54.028355  186170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:15:54.042371  186170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:15:54.175559  186170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:15:54.308690  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:15:54.323918  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:15:54.343000  186170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1028 12:15:54.343072  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.354540  186170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:15:54.354620  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.366058  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.377720  186170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:15:54.388649  186170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:15:54.401499  186170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:15:54.414177  186170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:15:54.414250  186170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:15:54.429049  186170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:15:54.441955  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:15:54.588173  186170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:15:54.686671  186170 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:15:54.686732  186170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:15:54.692246  186170 start.go:563] Will wait 60s for crictl version
	I1028 12:15:54.692303  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:15:54.697056  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:15:54.749343  186170 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:15:54.749410  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.783554  186170 ssh_runner.go:195] Run: crio --version
	I1028 12:15:54.817295  186170 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1028 12:15:52.838774  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.811974  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:15:53.811997  185942 pod_ready.go:82] duration metric: took 3.00700476s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:53.812008  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:55.824400  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:15:53.402920  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Start
	I1028 12:15:53.403172  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring networks are active...
	I1028 12:15:53.403912  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network default is active
	I1028 12:15:53.404195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Ensuring network mk-default-k8s-diff-port-349222 is active
	I1028 12:15:53.404615  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Getting domain xml...
	I1028 12:15:53.405554  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Creating domain...
	I1028 12:15:54.734540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting to get IP...
	I1028 12:15:54.735417  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735784  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:54.735880  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:54.735759  187305 retry.go:31] will retry after 268.036011ms: waiting for machine to come up
	I1028 12:15:55.005376  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.005999  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.006032  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.005930  187305 retry.go:31] will retry after 255.477665ms: waiting for machine to come up
	I1028 12:15:55.263500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264118  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.264153  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.264087  187305 retry.go:31] will retry after 354.942061ms: waiting for machine to come up
	I1028 12:15:55.620877  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621664  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:55.621698  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:55.621610  187305 retry.go:31] will retry after 569.620755ms: waiting for machine to come up
	I1028 12:15:56.192393  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192872  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.192907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.192803  187305 retry.go:31] will retry after 703.637263ms: waiting for machine to come up
	I1028 12:15:56.897762  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898304  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:56.898337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:56.898214  187305 retry.go:31] will retry after 713.628482ms: waiting for machine to come up
	I1028 12:15:54.818674  186170 main.go:141] libmachine: (old-k8s-version-089993) Calling .GetIP
	I1028 12:15:54.822118  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822477  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:95:38", ip: ""} in network mk-old-k8s-version-089993: {Iface:virbr1 ExpiryTime:2024-10-28 13:15:45 +0000 UTC Type:0 Mac:52:54:00:50:95:38 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-089993 Clientid:01:52:54:00:50:95:38}
	I1028 12:15:54.822508  186170 main.go:141] libmachine: (old-k8s-version-089993) DBG | domain old-k8s-version-089993 has defined IP address 192.168.61.119 and MAC address 52:54:00:50:95:38 in network mk-old-k8s-version-089993
	I1028 12:15:54.822713  186170 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1028 12:15:54.827066  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:15:54.839718  186170 kubeadm.go:883] updating cluster {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1028 12:15:54.839871  186170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 12:15:54.839932  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:54.896582  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:54.896647  186170 ssh_runner.go:195] Run: which lz4
	I1028 12:15:54.901264  186170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:15:54.905758  186170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:15:54.905798  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1028 12:15:56.763719  186170 crio.go:462] duration metric: took 1.862485619s to copy over tarball
	I1028 12:15:56.763807  186170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
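
    Note: the two steps above copy the preloaded-images tarball onto the VM and unpack it into /var so CRI-O starts with the Kubernetes images already present. A local-equivalent Go sketch of the extraction step (same tar flags and paths as in the log; running it anywhere other than the test VM is purely illustrative):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirrors the extraction command in the log: lz4-decompress the
        // preload tarball into /var while preserving security xattrs.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        log.Printf("preloaded images extracted:\n%s", out)
    }
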
	I1028 12:15:58.321600  185942 pod_ready.go:103] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:00.018244  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.018285  185942 pod_ready.go:82] duration metric: took 6.206271068s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.018297  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028610  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.028638  185942 pod_ready.go:82] duration metric: took 10.334289ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.028653  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041057  185942 pod_ready.go:93] pod "kube-proxy-dl7xq" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.041091  185942 pod_ready.go:82] duration metric: took 12.429027ms for pod "kube-proxy-dl7xq" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.041106  185942 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049617  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:00.049645  185942 pod_ready.go:82] duration metric: took 8.529436ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:00.049659  185942 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	I1028 12:15:57.613338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613844  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:57.613873  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:57.613796  187305 retry.go:31] will retry after 1.188479203s: waiting for machine to come up
	I1028 12:15:58.803300  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803690  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:15:58.803724  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:15:58.803650  187305 retry.go:31] will retry after 1.439057212s: waiting for machine to come up
	I1028 12:16:00.244665  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245201  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:00.245239  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:00.245141  187305 retry.go:31] will retry after 1.842038011s: waiting for machine to come up
	I1028 12:16:02.090283  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090879  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:02.090907  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:02.090828  187305 retry.go:31] will retry after 1.556155538s: waiting for machine to come up
	I1028 12:15:59.824110  186170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060253776s)
	I1028 12:15:59.824148  186170 crio.go:469] duration metric: took 3.060398276s to extract the tarball
	I1028 12:15:59.824158  186170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:15:59.871783  186170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:15:59.913216  186170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1028 12:15:59.913249  186170 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:15:59.913338  186170 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.913374  186170 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.913404  186170 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.913415  186170 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.913435  186170 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.913459  186170 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.913378  186170 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1028 12:15:59.913372  186170 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:15:59.914923  186170 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:15:59.914935  186170 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:15:59.914944  186170 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:15:59.914952  186170 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:15:59.914924  186170 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1028 12:15:59.915002  186170 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1028 12:15:59.915023  186170 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.107392  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.125355  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.128498  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.134762  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.138350  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.141722  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1028 12:16:00.186291  186170 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1028 12:16:00.186340  186170 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.186404  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253168  186170 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1028 12:16:00.253211  186170 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.253256  186170 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1028 12:16:00.253279  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.253288  186170 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.253328  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290772  186170 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1028 12:16:00.290817  186170 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.290857  186170 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1028 12:16:00.290890  186170 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1028 12:16:00.290869  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290913  186170 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1028 12:16:00.290946  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.290970  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.290896  186170 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.291016  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.291049  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.291080  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.317629  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.377316  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.377376  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.377463  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.377430  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.377515  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.488216  186170 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1028 12:16:00.488279  186170 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.488337  186170 ssh_runner.go:195] Run: which crictl
	I1028 12:16:00.513051  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.556242  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.556277  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.556380  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1028 12:16:00.556435  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1028 12:16:00.556544  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1028 12:16:00.556560  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.634253  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1028 12:16:00.737688  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1028 12:16:00.737739  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1028 12:16:00.737799  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1028 12:16:00.737870  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1028 12:16:00.737897  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1028 12:16:00.738000  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.832218  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1028 12:16:00.832247  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1028 12:16:00.832284  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1028 12:16:00.844460  186170 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1028 12:16:00.880788  186170 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1028 12:16:01.121687  186170 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:01.269970  186170 cache_images.go:92] duration metric: took 1.356701981s to LoadCachedImages
	W1028 12:16:01.270091  186170 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1028 12:16:01.270114  186170 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1028 12:16:01.270229  186170 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-089993 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:01.270317  186170 ssh_runner.go:195] Run: crio config
	I1028 12:16:01.330579  186170 cni.go:84] Creating CNI manager for ""
	I1028 12:16:01.330604  186170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:01.330615  186170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:01.330634  186170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-089993 NodeName:old-k8s-version-089993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1028 12:16:01.330861  186170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-089993"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:01.330940  186170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1028 12:16:01.342449  186170 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:01.342542  186170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:01.354804  186170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1028 12:16:01.373823  186170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:01.393848  186170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1028 12:16:01.414537  186170 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:01.419057  186170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
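
    Note: the bash one-liner above rewrites /etc/hosts by dropping any stale entry for control-plane.minikube.internal and appending the current IP. A Go sketch of the same rewrite, for illustration only (the real file needs root, and the IP/name come from the log):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const name = "control-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        // Keep every line that does not already map the control-plane name,
        // then append the fresh mapping, mirroring the grep -v / echo pipeline.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.61.119\t"+name)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }
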
	I1028 12:16:01.434491  186170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:01.605220  186170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:01.629171  186170 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993 for IP: 192.168.61.119
	I1028 12:16:01.629198  186170 certs.go:194] generating shared ca certs ...
	I1028 12:16:01.629223  186170 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:01.629411  186170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:01.629473  186170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:01.629486  186170 certs.go:256] generating profile certs ...
	I1028 12:16:01.629625  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.key
	I1028 12:16:01.629692  186170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key.609c03ee
	I1028 12:16:01.629740  186170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key
	I1028 12:16:01.629886  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:01.629929  186170 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:01.629943  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:01.629984  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:01.630025  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:01.630060  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:01.630113  186170 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:01.630911  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:01.673352  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:01.705371  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:01.731174  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:01.775555  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1028 12:16:01.809878  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:01.842241  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:01.876753  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1028 12:16:01.914897  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:01.945991  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:01.977763  186170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:02.010010  186170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:02.034184  186170 ssh_runner.go:195] Run: openssl version
	I1028 12:16:02.042784  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:02.055148  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060669  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.060751  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:02.067345  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:02.079427  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:02.091613  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.096996  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.097061  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:02.103561  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:02.115762  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:02.128405  186170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133889  186170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.133961  186170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:02.140274  186170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
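
    Note: the repeated test/ln -fs commands above install each CA bundle under /etc/ssl/certs using its OpenSSL subject hash (e.g. b5213941.0) so OpenSSL-based tools can find it. A small illustrative Go helper with the same shape; installCA is a hypothetical name and this is not minikube's own code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA computes the OpenSSL subject hash of a CA bundle and exposes
    // it as /etc/ssl/certs/<hash>.0, like the shell commands in the log.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace a stale link if one exists
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, "install failed:", err)
            os.Exit(1)
        }
    }
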
	I1028 12:16:02.155800  186170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:02.162343  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:02.170755  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:02.179332  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:02.187694  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:02.196183  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:02.204538  186170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
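
    Note: each "openssl x509 -checkend 86400" run above verifies that a control-plane certificate stays valid for at least another 24 hours before the existing configuration is reused. An equivalent check written directly against crypto/x509, as a sketch (validFor is a hypothetical helper name; the path is one of those in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the certificate at path remains valid for at
    // least d, i.e. the Go analogue of `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
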
	I1028 12:16:02.212604  186170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-089993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-089993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:02.212715  186170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:02.212796  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.260250  186170 cri.go:89] found id: ""
	I1028 12:16:02.260350  186170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:02.274246  186170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:02.274269  186170 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:02.274335  186170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:02.287972  186170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:02.288983  186170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-089993" does not appear in /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:16:02.289661  186170 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-132631/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-089993" cluster setting kubeconfig missing "old-k8s-version-089993" context setting]
	I1028 12:16:02.290778  186170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:02.292747  186170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:02.306303  186170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1028 12:16:02.306357  186170 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:02.306375  186170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:02.306438  186170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:02.348962  186170 cri.go:89] found id: ""
	I1028 12:16:02.349041  186170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:02.366483  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:02.377667  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:02.377690  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:02.377758  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:02.387857  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:02.387915  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:02.398137  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:02.408922  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:02.408992  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:02.419044  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.428952  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:02.429020  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:02.439488  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:02.450112  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:02.450174  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:02.461051  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:02.472059  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.607734  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:02.165378  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:04.555857  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:03.648337  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648760  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:03.648789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:03.648736  187305 retry.go:31] will retry after 2.586516153s: waiting for machine to come up
	I1028 12:16:06.236934  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237402  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:06.237433  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:06.237352  187305 retry.go:31] will retry after 3.507901898s: waiting for machine to come up
	I1028 12:16:03.452795  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.710145  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.811788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:03.903114  186170 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:03.903247  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.403775  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:04.904258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.403398  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:05.903353  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.403907  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.903762  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.403316  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:07.904259  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:06.557581  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.056276  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:09.746980  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747449  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | unable to find current IP address of domain default-k8s-diff-port-349222 in network mk-default-k8s-diff-port-349222
	I1028 12:16:09.747482  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | I1028 12:16:09.747401  187305 retry.go:31] will retry after 4.499585546s: waiting for machine to come up
	I1028 12:16:08.403804  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:08.903726  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.404155  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:09.903968  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.403990  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:10.903742  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.403836  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:11.904088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.403293  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:12.903635  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
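
    Note: the burst of pgrep commands above is api_server.go polling roughly every half second for a kube-apiserver process started by minikube. A compact Go sketch of that wait loop (the pgrep pattern and cadence come from the log; the timeout value here is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Poll pgrep until a matching apiserver process appears or we give up.
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                fmt.Printf("apiserver process appeared, pid %s", out)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the apiserver process")
    }
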
	I1028 12:16:15.487114  185546 start.go:364] duration metric: took 56.6590668s to acquireMachinesLock for "no-preload-871884"
	I1028 12:16:15.487176  185546 start.go:96] Skipping create...Using existing machine configuration
	I1028 12:16:15.487190  185546 fix.go:54] fixHost starting: 
	I1028 12:16:15.487650  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:16:15.487713  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:16:15.508857  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I1028 12:16:15.509318  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:16:15.510000  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:16:15.510037  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:16:15.510385  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:16:15.510599  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:15.510779  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:16:15.512738  185546 fix.go:112] recreateIfNeeded on no-preload-871884: state=Stopped err=<nil>
	I1028 12:16:15.512772  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	W1028 12:16:15.512963  185546 fix.go:138] unexpected machine state, will restart: <nil>
	I1028 12:16:15.514890  185546 out.go:177] * Restarting existing kvm2 VM for "no-preload-871884" ...
	I1028 12:16:11.056427  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:13.058549  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.556621  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:15.516551  185546 main.go:141] libmachine: (no-preload-871884) Calling .Start
	I1028 12:16:15.516786  185546 main.go:141] libmachine: (no-preload-871884) Ensuring networks are active...
	I1028 12:16:15.517934  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network default is active
	I1028 12:16:15.518543  185546 main.go:141] libmachine: (no-preload-871884) Ensuring network mk-no-preload-871884 is active
	I1028 12:16:15.519028  185546 main.go:141] libmachine: (no-preload-871884) Getting domain xml...
	I1028 12:16:15.519878  185546 main.go:141] libmachine: (no-preload-871884) Creating domain...
	I1028 12:16:14.249128  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249645  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has current primary IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.249674  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Found IP for machine: 192.168.50.75
	I1028 12:16:14.249689  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserving static IP address...
	I1028 12:16:14.250120  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Reserved static IP address: 192.168.50.75
	I1028 12:16:14.250139  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Waiting for SSH to be available...
	I1028 12:16:14.250164  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.250205  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | skip adding static IP to network mk-default-k8s-diff-port-349222 - found existing host DHCP lease matching {name: "default-k8s-diff-port-349222", mac: "52:54:00:90:bc:cf", ip: "192.168.50.75"}
	I1028 12:16:14.250222  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Getting to WaitForSSH function...
	I1028 12:16:14.252540  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.252883  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.252926  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.253035  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH client type: external
	I1028 12:16:14.253075  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa (-rw-------)
	I1028 12:16:14.253100  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:14.253115  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | About to run SSH command:
	I1028 12:16:14.253129  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | exit 0
	I1028 12:16:14.373688  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | SSH cmd err, output: <nil>: 
	I1028 12:16:14.374101  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetConfigRaw
	I1028 12:16:14.374713  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.377338  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.377824  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.377857  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.378094  186547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/config.json ...
	I1028 12:16:14.378326  186547 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:14.378345  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:14.378556  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.380694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.380976  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.380992  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.381143  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.381356  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381521  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.381678  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.381882  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.382107  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.382119  186547 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:14.490030  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:14.490061  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490303  186547 buildroot.go:166] provisioning hostname "default-k8s-diff-port-349222"
	I1028 12:16:14.490331  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.490523  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.492989  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493395  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.493426  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.493626  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.493794  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.493960  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.494104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.494258  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.494427  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.494439  186547 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-349222 && echo "default-k8s-diff-port-349222" | sudo tee /etc/hostname
	I1028 12:16:14.604373  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-349222
	
	I1028 12:16:14.604405  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.607135  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607437  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.607465  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.607658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.607891  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608060  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.608187  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.608353  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:14.608549  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:14.608569  186547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-349222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-349222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-349222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:14.714933  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:14.714963  186547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:14.714990  186547 buildroot.go:174] setting up certificates
	I1028 12:16:14.714998  186547 provision.go:84] configureAuth start
	I1028 12:16:14.715007  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetMachineName
	I1028 12:16:14.715321  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:14.718051  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.718406  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.718504  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.720638  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.720945  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.720972  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.721127  186547 provision.go:143] copyHostCerts
	I1028 12:16:14.721198  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:14.721213  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:14.721283  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:14.721407  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:14.721417  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:14.721446  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:14.721522  186547 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:14.721544  186547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:14.721571  186547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:14.721634  186547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-349222 san=[127.0.0.1 192.168.50.75 default-k8s-diff-port-349222 localhost minikube]
	I1028 12:16:14.854227  186547 provision.go:177] copyRemoteCerts
	I1028 12:16:14.854285  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:14.854314  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:14.857250  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857590  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:14.857620  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:14.857897  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:14.858091  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:14.858286  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:14.858434  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:14.940752  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:14.967575  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1028 12:16:14.992693  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1028 12:16:15.017801  186547 provision.go:87] duration metric: took 302.790563ms to configureAuth
	I1028 12:16:15.017831  186547 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:15.018073  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:15.018168  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.021181  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.021574  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.021719  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.021894  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022113  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.022317  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.022564  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.022744  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.022761  186547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:15.257308  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:15.257339  186547 machine.go:96] duration metric: took 878.998573ms to provisionDockerMachine
	I1028 12:16:15.257350  186547 start.go:293] postStartSetup for "default-k8s-diff-port-349222" (driver="kvm2")
	I1028 12:16:15.257360  186547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:15.257378  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.257695  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:15.257721  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.260380  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260767  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.260795  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.260990  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.261186  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.261370  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.261513  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.341376  186547 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:15.345736  186547 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:15.345760  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:15.345820  186547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:15.345891  186547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:15.345978  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:15.355662  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:15.381750  186547 start.go:296] duration metric: took 124.385481ms for postStartSetup
	I1028 12:16:15.381788  186547 fix.go:56] duration metric: took 22.00329785s for fixHost
	I1028 12:16:15.381807  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.384756  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385099  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.385130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.385359  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.385587  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385782  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.385974  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.386165  186547 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:15.386345  186547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.50.75 22 <nil> <nil>}
	I1028 12:16:15.386355  186547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:15.486905  186547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117775.445749296
	
	I1028 12:16:15.486934  186547 fix.go:216] guest clock: 1730117775.445749296
	I1028 12:16:15.486944  186547 fix.go:229] Guest: 2024-10-28 12:16:15.445749296 +0000 UTC Remote: 2024-10-28 12:16:15.381791731 +0000 UTC m=+192.967058764 (delta=63.957565ms)
	I1028 12:16:15.487005  186547 fix.go:200] guest clock delta is within tolerance: 63.957565ms
	I1028 12:16:15.487018  186547 start.go:83] releasing machines lock for "default-k8s-diff-port-349222", held for 22.108560462s
	I1028 12:16:15.487082  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.487382  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:15.490840  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491343  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.491374  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.491528  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492208  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492431  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:16:15.492581  186547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:15.492657  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.492706  186547 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:15.492746  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:16:15.496062  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496119  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496520  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496544  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:15.496694  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:15.496901  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497225  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497257  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:16:15.497458  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497583  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:16:15.497665  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.497798  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:16:15.497977  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:16:15.590741  186547 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:15.615347  186547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:15.762979  186547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:15.770132  186547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:15.770221  186547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:15.788651  186547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1028 12:16:15.788684  186547 start.go:495] detecting cgroup driver to use...
	I1028 12:16:15.788751  186547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:15.806118  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:15.820916  186547 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:15.820986  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:15.835770  186547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:15.850994  186547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:15.979465  186547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:16.160837  186547 docker.go:233] disabling docker service ...
	I1028 12:16:16.160924  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:16.177934  186547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:16.194616  186547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:16.320605  186547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:16.464175  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:16.479626  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:16.502747  186547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:16.502889  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.514636  186547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:16.514695  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.528137  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.539961  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.552263  186547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:16.566275  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.578632  186547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.599084  186547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:16.611250  186547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:16.621976  186547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:16.622052  186547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:16.640800  186547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1028 12:16:16.651767  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:16.806628  186547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:16.903584  186547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:16.903655  186547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:16.909873  186547 start.go:563] Will wait 60s for crictl version
	I1028 12:16:16.909950  186547 ssh_runner.go:195] Run: which crictl
	I1028 12:16:16.915388  186547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:16.964424  186547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1028 12:16:16.964517  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:16.997415  186547 ssh_runner.go:195] Run: crio --version
	I1028 12:16:17.032323  186547 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:17.033747  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetIP
	I1028 12:16:17.036500  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.036903  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:16:17.036935  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:16:17.037118  186547 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:17.041698  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:17.056649  186547 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:17.056792  186547 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:17.056840  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:17.099143  186547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:17.099233  186547 ssh_runner.go:195] Run: which lz4
	I1028 12:16:17.103882  186547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1028 12:16:17.108660  186547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1028 12:16:17.108699  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1028 12:16:13.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:13.903443  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.404017  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:14.903385  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.403903  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:15.904106  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.403713  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:16.903397  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.404299  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.903855  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:17.559178  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:19.560739  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:16.842086  185546 main.go:141] libmachine: (no-preload-871884) Waiting to get IP...
	I1028 12:16:16.843056  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:16.843514  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:16.843599  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:16.843484  187500 retry.go:31] will retry after 240.188984ms: waiting for machine to come up
	I1028 12:16:17.085193  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.085702  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.085739  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.085649  187500 retry.go:31] will retry after 361.44193ms: waiting for machine to come up
	I1028 12:16:17.448961  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.449619  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.449645  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.449576  187500 retry.go:31] will retry after 386.179326ms: waiting for machine to come up
	I1028 12:16:17.837097  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:17.837879  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:17.837907  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:17.837834  187500 retry.go:31] will retry after 531.12665ms: waiting for machine to come up
	I1028 12:16:18.370266  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:18.370803  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:18.370834  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:18.370746  187500 retry.go:31] will retry after 760.20134ms: waiting for machine to come up
	I1028 12:16:19.132853  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.133415  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.133444  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.133360  187500 retry.go:31] will retry after 817.773678ms: waiting for machine to come up
	I1028 12:16:19.952317  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:19.952800  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:19.952824  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:19.952760  187500 retry.go:31] will retry after 861.798605ms: waiting for machine to come up
	I1028 12:16:20.816156  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:20.816794  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:20.816826  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:20.816750  187500 retry.go:31] will retry after 908.062214ms: waiting for machine to come up
	I1028 12:16:18.686980  186547 crio.go:462] duration metric: took 1.583134893s to copy over tarball
	I1028 12:16:18.687053  186547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1028 12:16:21.016264  186547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.329174428s)
	I1028 12:16:21.016309  186547 crio.go:469] duration metric: took 2.329300291s to extract the tarball
	I1028 12:16:21.016322  186547 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1028 12:16:21.053950  186547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:21.112876  186547 crio.go:514] all images are preloaded for cri-o runtime.
	I1028 12:16:21.112903  186547 cache_images.go:84] Images are preloaded, skipping loading
	I1028 12:16:21.112914  186547 kubeadm.go:934] updating node { 192.168.50.75 8444 v1.31.2 crio true true} ...
	I1028 12:16:21.113037  186547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-349222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1028 12:16:21.113119  186547 ssh_runner.go:195] Run: crio config
	I1028 12:16:21.179853  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:21.179877  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:21.179888  186547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:21.179907  186547 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.75 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-349222 NodeName:default-k8s-diff-port-349222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:21.180039  186547 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.75
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-349222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.75"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.75"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:21.180117  186547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:21.191650  186547 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:21.191721  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:21.201670  186547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1028 12:16:21.220426  186547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:21.240774  186547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1028 12:16:21.263336  186547 ssh_runner.go:195] Run: grep 192.168.50.75	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:21.267818  186547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:21.281577  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:21.441517  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:21.464117  186547 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222 for IP: 192.168.50.75
	I1028 12:16:21.464145  186547 certs.go:194] generating shared ca certs ...
	I1028 12:16:21.464167  186547 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:21.464392  186547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:21.464458  186547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:21.464485  186547 certs.go:256] generating profile certs ...
	I1028 12:16:21.464599  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/client.key
	I1028 12:16:21.464691  186547 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key.e54e33e0
	I1028 12:16:21.464749  186547 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key
	I1028 12:16:21.464919  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:21.464967  186547 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:21.464981  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:21.465006  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:21.465031  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:21.465069  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:21.465124  186547 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:21.465976  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:21.511145  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:21.572071  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:21.613442  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:21.655508  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1028 12:16:21.687378  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1028 12:16:21.713227  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:21.738909  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:21.765274  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:21.792427  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:21.817632  186547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:21.842996  186547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:21.861059  186547 ssh_runner.go:195] Run: openssl version
	I1028 12:16:21.867814  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:21.880769  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886245  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.886325  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:21.893179  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
	I1028 12:16:21.908974  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:21.926992  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932350  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.932428  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:21.939073  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:21.952302  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:21.965485  186547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971486  186547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.971564  186547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:21.978531  186547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:21.995399  186547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:22.001453  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:22.009449  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:22.016898  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:22.024410  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:22.033151  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:22.040981  186547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
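A note on the openssl x509 -checkend 86400 runs above: they confirm that each reused control-plane certificate stays valid for at least another 24 hours before the existing cluster is restarted. A minimal Go sketch of an equivalent check, for illustration only (this is not minikube's implementation and the certificate path is just a placeholder):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what "openssl x509 -checkend <seconds>" tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}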
	I1028 12:16:22.048298  186547 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-349222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-349222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:22.048441  186547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:22.048531  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.095210  186547 cri.go:89] found id: ""
	I1028 12:16:22.095319  186547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:22.111740  186547 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:22.111772  186547 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:22.111828  186547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:22.122472  186547 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:22.123648  186547 kubeconfig.go:125] found "default-k8s-diff-port-349222" server: "https://192.168.50.75:8444"
	I1028 12:16:22.126117  186547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:22.137057  186547 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.75
	I1028 12:16:22.137096  186547 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:22.137108  186547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:22.137179  186547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:22.180526  186547 cri.go:89] found id: ""
	I1028 12:16:22.180638  186547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:22.197697  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:22.208176  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:22.208197  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:22.208246  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:16:22.218379  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:22.218438  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:22.228844  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:16:22.239330  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:22.239407  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:22.250200  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.260309  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:22.260374  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:22.271041  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:16:22.281556  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:22.281637  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:22.294003  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:22.305123  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:22.426791  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:18.403494  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:18.903364  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.403869  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:19.904257  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.404252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:20.904028  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.404218  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:21.903631  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.403882  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.904188  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:22.058068  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:24.059822  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:21.726767  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:21.727332  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:21.727373  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:21.727224  187500 retry.go:31] will retry after 1.684184533s: waiting for machine to come up
	I1028 12:16:23.412691  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:23.413228  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:23.413254  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:23.413177  187500 retry.go:31] will retry after 1.416062445s: waiting for machine to come up
	I1028 12:16:24.830846  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:24.831450  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:24.831480  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:24.831393  187500 retry.go:31] will retry after 2.716897952s: waiting for machine to come up
	I1028 12:16:23.288371  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.506229  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.575063  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:23.644776  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:23.644896  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.145579  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.645050  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.666456  186547 api_server.go:72] duration metric: took 1.021679294s to wait for apiserver process to appear ...
	I1028 12:16:24.666493  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:24.666518  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:24.667086  186547 api_server.go:269] stopped: https://192.168.50.75:8444/healthz: Get "https://192.168.50.75:8444/healthz": dial tcp 192.168.50.75:8444: connect: connection refused
	I1028 12:16:25.166765  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:23.404152  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:23.904225  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.403333  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:24.904323  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.404279  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:25.904317  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.404253  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:26.904083  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.403621  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:27.903752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.336957  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.337000  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.337015  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.382075  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:16:28.382110  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:16:28.667083  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:28.671910  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:28.671935  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.167591  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.173364  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:16:29.173397  186547 api_server.go:103] status: https://192.168.50.75:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:16:29.666902  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:16:29.672205  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:16:29.679964  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:16:29.680002  186547 api_server.go:131] duration metric: took 5.013500479s to wait for apiserver health ...
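The healthz sequence above is the normal restart pattern: first a connection refused while the apiserver container starts, then 403s because the unauthenticated probe runs as the anonymous user, then 500s while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A rough Go sketch of such a poll, assuming the profile's CA and admin client certificate (the URL and paths are illustrative, not taken from minikube's code):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
	// the timeout elapses. Presenting a client certificate avoids the
	// "system:anonymous cannot get path /healthz" 403 seen in the log.
	func waitForHealthz(url, caPath, certPath, keyPath string, timeout time.Duration) error {
		caPEM, err := os.ReadFile(caPath)
		if err != nil {
			return err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		clientCert, err := tls.LoadX509KeyPair(certPath, keyPath)
		if err != nil {
			return err
		}
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{clientCert}},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}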
	I1028 12:16:29.680014  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:16:29.680032  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:29.681992  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:16:26.558629  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:28.560116  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:27.550893  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:27.551454  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:27.551476  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:27.551438  187500 retry.go:31] will retry after 2.986712877s: waiting for machine to come up
	I1028 12:16:30.539999  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:30.540601  185546 main.go:141] libmachine: (no-preload-871884) DBG | unable to find current IP address of domain no-preload-871884 in network mk-no-preload-871884
	I1028 12:16:30.540632  185546 main.go:141] libmachine: (no-preload-871884) DBG | I1028 12:16:30.540526  187500 retry.go:31] will retry after 3.947007446s: waiting for machine to come up
	I1028 12:16:29.683325  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:16:29.697362  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:16:29.717296  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:16:29.726327  186547 system_pods.go:59] 8 kube-system pods found
	I1028 12:16:29.726363  186547 system_pods.go:61] "coredns-7c65d6cfc9-k5h7n" [e203fcce-1a8a-431b-a816-d75b33ca9417] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:16:29.726374  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [2214daee-0302-44cd-9297-836eeb011232] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:16:29.726391  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [c4331c24-07e2-4b50-ab04-31bcd00960e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:16:29.726402  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [9dddd9fb-ad03-4771-af1b-d9e1e024af52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:16:29.726413  186547 system_pods.go:61] "kube-proxy-bqq65" [ed5d0c14-0ddb-4446-a2f7-ae457d629fb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1028 12:16:29.726423  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [9cfcc366-038f-43a9-b919-48742fa419af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:16:29.726434  186547 system_pods.go:61] "metrics-server-6867b74b74-cgkz9" [3d919412-efb8-4030-a5d0-3c325c824c48] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:16:29.726445  186547 system_pods.go:61] "storage-provisioner" [613b003c-1eee-4294-947f-ea7a21edc8d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:16:29.726464  186547 system_pods.go:74] duration metric: took 9.135782ms to wait for pod list to return data ...
	I1028 12:16:29.726478  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:16:29.729971  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:16:29.729996  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:16:29.730009  186547 node_conditions.go:105] duration metric: took 3.525858ms to run NodePressure ...
	I1028 12:16:29.730035  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:30.043775  186547 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048614  186547 kubeadm.go:739] kubelet initialised
	I1028 12:16:30.048638  186547 kubeadm.go:740] duration metric: took 4.83853ms waiting for restarted kubelet to initialise ...
	I1028 12:16:30.048647  186547 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:16:30.053908  186547 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:32.063283  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
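The pod_ready.go entries above and below come from a loop that repeatedly fetches each system-critical pod and inspects its Ready condition. A hedged client-go sketch of that kind of check (illustrative only; the kubeconfig path and pod name are placeholders taken from the log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod in kube-system has condition Ready=True.
	func podReady(clientset *kubernetes.Clientset, name string) (bool, error) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ready, err := podReady(clientset, "coredns-7c65d6cfc9-k5h7n")
		fmt.Println(ready, err)
	}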
	I1028 12:16:28.404110  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:28.904058  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.404042  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:29.903819  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.404114  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:30.904140  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.404241  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.903586  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.403858  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:32.903566  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:31.057577  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:33.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:35.557338  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:34.491658  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492175  185546 main.go:141] libmachine: (no-preload-871884) Found IP for machine: 192.168.72.156
	I1028 12:16:34.492197  185546 main.go:141] libmachine: (no-preload-871884) Reserving static IP address...
	I1028 12:16:34.492215  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has current primary IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.492674  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.492704  185546 main.go:141] libmachine: (no-preload-871884) Reserved static IP address: 192.168.72.156
	I1028 12:16:34.492739  185546 main.go:141] libmachine: (no-preload-871884) DBG | skip adding static IP to network mk-no-preload-871884 - found existing host DHCP lease matching {name: "no-preload-871884", mac: "52:54:00:d0:ce:7e", ip: "192.168.72.156"}
	I1028 12:16:34.492763  185546 main.go:141] libmachine: (no-preload-871884) DBG | Getting to WaitForSSH function...
	I1028 12:16:34.492777  185546 main.go:141] libmachine: (no-preload-871884) Waiting for SSH to be available...
	I1028 12:16:34.495176  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495502  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.495536  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.495682  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH client type: external
	I1028 12:16:34.495714  185546 main.go:141] libmachine: (no-preload-871884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa (-rw-------)
	I1028 12:16:34.495747  185546 main.go:141] libmachine: (no-preload-871884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1028 12:16:34.495770  185546 main.go:141] libmachine: (no-preload-871884) DBG | About to run SSH command:
	I1028 12:16:34.495796  185546 main.go:141] libmachine: (no-preload-871884) DBG | exit 0
	I1028 12:16:34.625650  185546 main.go:141] libmachine: (no-preload-871884) DBG | SSH cmd err, output: <nil>: 
	I1028 12:16:34.625959  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetConfigRaw
	I1028 12:16:34.626602  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.629137  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629442  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.629477  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.629733  185546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/config.json ...
	I1028 12:16:34.629938  185546 machine.go:93] provisionDockerMachine start ...
	I1028 12:16:34.629957  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:34.630153  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.632415  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.632777  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.632804  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.633033  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.633247  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633422  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.633592  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.633762  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.633954  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.633968  185546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1028 12:16:34.738368  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1028 12:16:34.738406  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738696  185546 buildroot.go:166] provisioning hostname "no-preload-871884"
	I1028 12:16:34.738729  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.738926  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.741750  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742216  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.742322  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.742339  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.742538  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742689  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.742857  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.743032  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.743248  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.743266  185546 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-871884 && echo "no-preload-871884" | sudo tee /etc/hostname
	I1028 12:16:34.863767  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-871884
	
	I1028 12:16:34.863802  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.867136  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867530  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.867561  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.867822  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:34.868039  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868251  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:34.868430  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:34.868634  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:34.868880  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:34.868905  185546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-871884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-871884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-871884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1028 12:16:34.989420  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1028 12:16:34.989450  185546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19876-132631/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-132631/.minikube}
	I1028 12:16:34.989468  185546 buildroot.go:174] setting up certificates
	I1028 12:16:34.989476  185546 provision.go:84] configureAuth start
	I1028 12:16:34.989485  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetMachineName
	I1028 12:16:34.989790  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:34.992627  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.992977  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.993007  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.993225  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:34.995586  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.995888  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:34.995911  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:34.996122  185546 provision.go:143] copyHostCerts
	I1028 12:16:34.996190  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem, removing ...
	I1028 12:16:34.996204  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem
	I1028 12:16:34.996261  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/ca.pem (1078 bytes)
	I1028 12:16:34.996375  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem, removing ...
	I1028 12:16:34.996384  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem
	I1028 12:16:34.996408  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/cert.pem (1123 bytes)
	I1028 12:16:34.996472  185546 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem, removing ...
	I1028 12:16:34.996482  185546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem
	I1028 12:16:34.996499  185546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-132631/.minikube/key.pem (1679 bytes)
	I1028 12:16:34.996559  185546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem org=jenkins.no-preload-871884 san=[127.0.0.1 192.168.72.156 localhost minikube no-preload-871884]
	I1028 12:16:35.437900  185546 provision.go:177] copyRemoteCerts
	I1028 12:16:35.437961  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1028 12:16:35.437985  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.440936  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441329  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.441361  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.441555  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.441756  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.441921  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.442085  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.524911  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1028 12:16:35.554631  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1028 12:16:35.586946  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1028 12:16:35.620121  185546 provision.go:87] duration metric: took 630.630531ms to configureAuth
	I1028 12:16:35.620155  185546 buildroot.go:189] setting minikube options for container-runtime
	I1028 12:16:35.620395  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:16:35.620502  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.623316  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623607  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.623643  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.623886  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.624099  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624290  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.624433  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.624612  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:35.624794  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:35.624810  185546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1028 12:16:35.886145  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1028 12:16:35.886178  185546 machine.go:96] duration metric: took 1.256224912s to provisionDockerMachine
	I1028 12:16:35.886196  185546 start.go:293] postStartSetup for "no-preload-871884" (driver="kvm2")
	I1028 12:16:35.886209  185546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1028 12:16:35.886232  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:35.886615  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1028 12:16:35.886653  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:35.889615  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890016  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:35.890048  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:35.890266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:35.890459  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:35.890654  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:35.890798  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:35.977889  185546 ssh_runner.go:195] Run: cat /etc/os-release
	I1028 12:16:35.983360  185546 info.go:137] Remote host: Buildroot 2023.02.9
	I1028 12:16:35.983387  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/addons for local assets ...
	I1028 12:16:35.983454  185546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-132631/.minikube/files for local assets ...
	I1028 12:16:35.983543  185546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem -> 1403032.pem in /etc/ssl/certs
	I1028 12:16:35.983674  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1028 12:16:35.997400  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:36.025665  185546 start.go:296] duration metric: took 139.454088ms for postStartSetup
	I1028 12:16:36.025714  185546 fix.go:56] duration metric: took 20.538525254s for fixHost
	I1028 12:16:36.025739  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.028490  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.028933  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.028964  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.029170  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.029386  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029573  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.029734  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.029909  185546 main.go:141] libmachine: Using SSH client type: native
	I1028 12:16:36.030087  185546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 192.168.72.156 22 <nil> <nil>}
	I1028 12:16:36.030098  185546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1028 12:16:36.138559  185546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730117796.101397993
	
	I1028 12:16:36.138589  185546 fix.go:216] guest clock: 1730117796.101397993
	I1028 12:16:36.138599  185546 fix.go:229] Guest: 2024-10-28 12:16:36.101397993 +0000 UTC Remote: 2024-10-28 12:16:36.025719388 +0000 UTC m=+359.787107454 (delta=75.678605ms)
	I1028 12:16:36.138633  185546 fix.go:200] guest clock delta is within tolerance: 75.678605ms
	I1028 12:16:36.138638  185546 start.go:83] releasing machines lock for "no-preload-871884", held for 20.651488254s
	I1028 12:16:36.138663  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.138953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:36.141711  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142144  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.142180  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.142323  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.142975  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143165  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:16:36.143240  185546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1028 12:16:36.143306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.143378  185546 ssh_runner.go:195] Run: cat /version.json
	I1028 12:16:36.143399  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:16:36.145980  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146166  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146348  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146375  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146507  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146617  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:36.146657  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:36.146701  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.146795  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:16:36.146882  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.146953  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:16:36.147013  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.147071  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:16:36.147202  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:16:36.223364  185546 ssh_runner.go:195] Run: systemctl --version
	I1028 12:16:36.246964  185546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1028 12:16:34.561016  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.564296  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:36.396734  185546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1028 12:16:36.403214  185546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1028 12:16:36.403298  185546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1028 12:16:36.421658  185546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
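The `find ... -exec mv {} {}.mk_disabled` run above renames the stock bridge/podman CNI configs out of the way (here 87-podman-bridge.conflist), so only the CNI that minikube configures later stays active. A quick check of what was disabled (sketch, run inside the guest):

    ls -l /etc/cni/net.d/*.mk_disabled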
	I1028 12:16:36.421695  185546 start.go:495] detecting cgroup driver to use...
	I1028 12:16:36.421772  185546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1028 12:16:36.441133  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1028 12:16:36.456750  185546 docker.go:217] disabling cri-docker service (if available) ...
	I1028 12:16:36.456806  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1028 12:16:36.473457  185546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1028 12:16:36.489210  185546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1028 12:16:36.621054  185546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1028 12:16:36.767341  185546 docker.go:233] disabling docker service ...
	I1028 12:16:36.767432  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1028 12:16:36.784655  185546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1028 12:16:36.799522  185546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1028 12:16:36.942312  185546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1028 12:16:37.066636  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1028 12:16:37.082284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1028 12:16:37.102462  185546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1028 12:16:37.102530  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.113687  185546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1028 12:16:37.113760  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.125624  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.137036  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.148417  185546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1028 12:16:37.160015  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.171382  185546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1028 12:16:37.192342  185546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
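The sed edits above pin the pause image to registry.k8s.io/pause:3.10, switch CRI-O to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and let pods bind low ports by adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A sketch for confirming the resulting drop-in on the guest (values are exactly the ones set above; nothing else in the file is touched):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf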
	I1028 12:16:37.204353  185546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1028 12:16:37.215188  185546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1028 12:16:37.215275  185546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1028 12:16:37.230653  185546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
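The sysctl lookup above exits with status 255 simply because the br_netfilter module is not loaded yet; loading it creates /proc/sys/net/bridge/bridge-nf-call-iptables, and IPv4 forwarding is enabled separately. The equivalent manual steps (sketch; the value printed after loading is typically 1, but that is a kernel default, not something this log shows):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'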
	I1028 12:16:37.241484  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:37.382996  185546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1028 12:16:37.479263  185546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1028 12:16:37.479363  185546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1028 12:16:37.485265  185546 start.go:563] Will wait 60s for crictl version
	I1028 12:16:37.485330  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:37.489545  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1028 12:16:37.536126  185546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
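crictl reports CRI-O 1.29.1 over the socket configured in /etc/crictl.yaml above. The same check can be run with the endpoint given explicitly (sketch):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version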
	I1028 12:16:37.536212  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.567538  185546 ssh_runner.go:195] Run: crio --version
	I1028 12:16:37.600370  185546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1028 12:16:33.404124  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:33.903341  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.403703  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:34.903445  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.404040  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:35.904246  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.403798  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:36.903950  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.403912  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.903423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:37.559329  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:40.057624  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:37.601686  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetIP
	I1028 12:16:37.604235  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604568  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:16:37.604601  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:16:37.604782  185546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1028 12:16:37.609354  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:37.624966  185546 kubeadm.go:883] updating cluster {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1028 12:16:37.625081  185546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 12:16:37.625117  185546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1028 12:16:37.664112  185546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1028 12:16:37.664149  185546 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1028 12:16:37.664262  185546 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.664306  185546 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.664334  185546 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1028 12:16:37.664311  185546 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.664352  185546 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.664393  185546 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.664434  185546 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.664399  185546 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666080  185546 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:37.666083  185546 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.666081  185546 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.666142  185546 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.666085  185546 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1028 12:16:37.666079  185546 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.666185  185546 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.666398  185546 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.840639  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.857089  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:37.859107  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1028 12:16:37.859358  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:37.863640  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:37.867925  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:37.876221  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:37.921581  185546 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1028 12:16:37.921638  185546 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:37.921689  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.042970  185546 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1028 12:16:38.043015  185546 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.043068  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093917  185546 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1028 12:16:38.093954  185546 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1028 12:16:38.093973  185546 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.093985  185546 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.094029  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094038  185546 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1028 12:16:38.094057  185546 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.094087  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.094094  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.094030  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.093976  185546 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1028 12:16:38.094143  185546 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.094152  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.094175  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:38.110134  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.110302  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.188826  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.188922  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.188979  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.193920  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.193929  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.292698  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.325562  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1028 12:16:38.331855  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.332873  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1028 12:16:38.345880  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1028 12:16:38.345951  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1028 12:16:38.414842  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1028 12:16:38.470776  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1028 12:16:38.470949  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1028 12:16:38.471044  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.481197  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1028 12:16:38.481333  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:38.503147  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1028 12:16:38.503171  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:38.503267  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:38.532884  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1028 12:16:38.533001  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:38.552405  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1028 12:16:38.552417  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1028 12:16:38.552472  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552485  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1028 12:16:38.552523  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:38.552529  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1028 12:16:38.552552  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1028 12:16:38.552527  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1028 12:16:38.552598  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1028 12:16:38.829851  185546 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127678  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.575124569s)
	I1028 12:16:41.127722  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1028 12:16:41.127744  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.575188461s)
	I1028 12:16:41.127775  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1028 12:16:41.127785  185546 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.297902587s)
	I1028 12:16:41.127803  185546 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127818  185546 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1028 12:16:41.127850  185546 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:41.127858  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1028 12:16:41.127895  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:16:39.064564  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:41.563643  186547 pod_ready.go:103] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:38.403644  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:38.904220  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.404068  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:39.904158  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.403660  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:40.903678  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.404061  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:41.903568  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.404297  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.904036  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:42.058025  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:44.557594  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.190694  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062807881s)
	I1028 12:16:43.190736  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1028 12:16:43.190752  185546 ssh_runner.go:235] Completed: which crictl: (2.062836368s)
	I1028 12:16:43.190773  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:43.190827  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:43.190831  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1028 12:16:45.281583  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.090685426s)
	I1028 12:16:45.281620  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1028 12:16:45.281650  185546 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281679  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.090821035s)
	I1028 12:16:45.281698  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1028 12:16:45.281750  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:45.325500  185546 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:16:42.565395  186547 pod_ready.go:93] pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.565425  186547 pod_ready.go:82] duration metric: took 12.511487215s for pod "coredns-7c65d6cfc9-k5h7n" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.565438  186547 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572364  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.572388  186547 pod_ready.go:82] duration metric: took 6.941356ms for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.572402  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579074  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.579099  186547 pod_ready.go:82] duration metric: took 6.689137ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.579116  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584088  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.584108  186547 pod_ready.go:82] duration metric: took 4.985095ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.584118  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588810  186547 pod_ready.go:93] pod "kube-proxy-bqq65" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:42.588837  186547 pod_ready.go:82] duration metric: took 4.711896ms for pod "kube-proxy-bqq65" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:42.588849  186547 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758349  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:16:43.758376  186547 pod_ready.go:82] duration metric: took 1.169519383s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:43.758387  186547 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	I1028 12:16:45.766209  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:43.404022  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:43.903570  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.403673  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:44.903585  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.403476  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:45.904069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.403906  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:46.904264  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.903991  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:47.059150  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.556589  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:49.174287  185546 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.84875195s)
	I1028 12:16:49.174340  185546 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1028 12:16:49.174291  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.892568087s)
	I1028 12:16:49.174422  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1028 12:16:49.174427  185546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:49.174466  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:49.174524  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1028 12:16:48.265641  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:50.271513  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:48.404207  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:48.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.404088  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:49.903614  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.403587  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:50.904256  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.404314  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.903794  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.404122  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.903312  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:51.557320  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.557540  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:51.438821  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.26426785s)
	I1028 12:16:51.438857  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1028 12:16:51.438890  185546 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.264449757s)
	I1028 12:16:51.438893  185546 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:51.438911  185546 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1028 12:16:51.438945  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1028 12:16:52.890902  185546 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.451935078s)
	I1028 12:16:52.890933  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1028 12:16:52.890960  185546 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:52.891010  185546 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1028 12:16:53.643145  185546 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19876-132631/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1028 12:16:53.643208  185546 cache_images.go:123] Successfully loaded all cached images
	I1028 12:16:53.643216  185546 cache_images.go:92] duration metric: took 15.979050279s to LoadCachedImages
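Because this profile runs without a preload tarball ("assuming images are not preloaded" above), each image is copied from the host cache, the stale copy is removed with crictl, and the tarball is loaded with podman. Roughly the manual equivalent for one image (sketch; paths and tags taken from the log above):

    sudo crictl rmi registry.k8s.io/kube-apiserver:v1.31.2 || true
    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
    sudo crictl images | grep kube-apiserver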
	I1028 12:16:53.643231  185546 kubeadm.go:934] updating node { 192.168.72.156 8443 v1.31.2 crio true true} ...
	I1028 12:16:53.643393  185546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-871884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
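The [Service] drop-in printed above is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 317-byte scp further down in this log). To confirm the kubelet was restarted with these flags (sketch, inside the guest):

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl show kubelet --property=ExecStart --no-pager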
	I1028 12:16:53.643480  185546 ssh_runner.go:195] Run: crio config
	I1028 12:16:53.701778  185546 cni.go:84] Creating CNI manager for ""

	I1028 12:16:53.701805  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:16:53.701814  185546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1028 12:16:53.701836  185546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.156 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-871884 NodeName:no-preload-871884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1028 12:16:53.701952  185546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-871884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.156"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.156"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1028 12:16:53.702019  185546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1028 12:16:53.714245  185546 binaries.go:44] Found k8s binaries, skipping transfer
	I1028 12:16:53.714327  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1028 12:16:53.725610  185546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1028 12:16:53.745071  185546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1028 12:16:53.766897  185546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
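The kubeadm config printed above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) derived from the kubeadm options struct, and the 2297-byte scp just above places it at /var/tmp/minikube/kubeadm.yaml.new. A sketch for inspecting what actually landed on the node (the diff only applies if a previously rendered config is still present, which this log does not show):

    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new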
	I1028 12:16:53.787043  185546 ssh_runner.go:195] Run: grep 192.168.72.156	control-plane.minikube.internal$ /etc/hosts
	I1028 12:16:53.791580  185546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1028 12:16:53.805088  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:16:53.945235  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:16:53.964073  185546 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884 for IP: 192.168.72.156
	I1028 12:16:53.964099  185546 certs.go:194] generating shared ca certs ...
	I1028 12:16:53.964115  185546 certs.go:226] acquiring lock for ca certs: {Name:mk53dbfb7389703f8def9344429825adee693423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:16:53.964290  185546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key
	I1028 12:16:53.964338  185546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key
	I1028 12:16:53.964355  185546 certs.go:256] generating profile certs ...
	I1028 12:16:53.964458  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.key
	I1028 12:16:53.964533  185546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key.6934b48e
	I1028 12:16:53.964584  185546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key
	I1028 12:16:53.964719  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem (1338 bytes)
	W1028 12:16:53.964750  185546 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303_empty.pem, impossibly tiny 0 bytes
	I1028 12:16:53.964765  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca-key.pem (1675 bytes)
	I1028 12:16:53.964801  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/ca.pem (1078 bytes)
	I1028 12:16:53.964831  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/cert.pem (1123 bytes)
	I1028 12:16:53.964866  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/certs/key.pem (1679 bytes)
	I1028 12:16:53.964921  185546 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem (1708 bytes)
	I1028 12:16:53.965632  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1028 12:16:54.004592  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1028 12:16:54.044270  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1028 12:16:54.079496  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1028 12:16:54.114473  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1028 12:16:54.141836  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1028 12:16:54.175201  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1028 12:16:54.202282  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1028 12:16:54.227874  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/certs/140303.pem --> /usr/share/ca-certificates/140303.pem (1338 bytes)
	I1028 12:16:54.254818  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/ssl/certs/1403032.pem --> /usr/share/ca-certificates/1403032.pem (1708 bytes)
	I1028 12:16:54.282950  185546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1028 12:16:54.310204  185546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1028 12:16:54.328834  185546 ssh_runner.go:195] Run: openssl version
	I1028 12:16:54.335391  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1028 12:16:54.347474  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352687  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:55 /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.352755  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1028 12:16:54.358834  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1028 12:16:54.373155  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140303.pem && ln -fs /usr/share/ca-certificates/140303.pem /etc/ssl/certs/140303.pem"
	I1028 12:16:54.387035  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392179  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 11:07 /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.392281  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140303.pem
	I1028 12:16:54.398488  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140303.pem /etc/ssl/certs/51391683.0"
	I1028 12:16:54.412352  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403032.pem && ln -fs /usr/share/ca-certificates/1403032.pem /etc/ssl/certs/1403032.pem"
	I1028 12:16:54.426361  185546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431415  185546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 11:07 /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.431470  185546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem
	I1028 12:16:54.437583  185546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403032.pem /etc/ssl/certs/3ec20f2e.0"
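The *.0 symlink names created above are the OpenSSL subject hashes of the corresponding certificates, which is why each `ln -fs` is preceded by an `openssl x509 -hash` call. For example (hash values as they appear in this log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941 -> /etc/ssl/certs/b5213941.0
    openssl x509 -hash -noout -in /usr/share/ca-certificates/1403032.pem      # 3ec20f2e -> /etc/ssl/certs/3ec20f2e.0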
	I1028 12:16:54.450708  185546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1028 12:16:54.456625  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1028 12:16:54.463458  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1028 12:16:54.469939  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1028 12:16:54.477873  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1028 12:16:54.484962  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1028 12:16:54.491679  185546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
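The `openssl x509 -noout -in <cert> -checkend 86400` calls above verify that each control-plane certificate stays valid for at least the next 24 hours before the cluster restart proceeds. A minimal Go sketch of the same check (illustrative only, not the minikube implementation; `certValidFor` is a made-up helper name, and the certificate path is one of the files probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path remains valid
// for at least the given duration (mirrors `openssl x509 -checkend`).
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for 24h:", ok)
}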
	I1028 12:16:54.498106  185546 kubeadm.go:392] StartCluster: {Name:no-preload-871884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-871884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 12:16:54.498211  185546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1028 12:16:54.498287  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.543142  185546 cri.go:89] found id: ""
	I1028 12:16:54.543250  185546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1028 12:16:54.555948  185546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1028 12:16:54.555971  185546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1028 12:16:54.556021  185546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1028 12:16:54.566954  185546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1028 12:16:54.567990  185546 kubeconfig.go:125] found "no-preload-871884" server: "https://192.168.72.156:8443"
	I1028 12:16:54.570149  185546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1028 12:16:54.581005  185546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.156
	I1028 12:16:54.581039  185546 kubeadm.go:1160] stopping kube-system containers ...
	I1028 12:16:54.581051  185546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1028 12:16:54.581100  185546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1028 12:16:54.622676  185546 cri.go:89] found id: ""
	I1028 12:16:54.622742  185546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1028 12:16:54.642427  185546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:16:54.655104  185546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:16:54.655131  185546 kubeadm.go:157] found existing configuration files:
	
	I1028 12:16:54.655199  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:16:54.665367  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:16:54.665432  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:16:54.675664  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:16:54.685921  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:16:54.685997  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:16:54.698451  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.709982  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:16:54.710060  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:16:54.721243  185546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:16:54.731699  185546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:16:54.731780  185546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:16:54.743365  185546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:16:54.754284  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:54.868055  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.645470  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.858805  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:55.940632  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:16:56.020654  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:16:56.020735  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:52.764963  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:54.766822  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.768500  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:53.403716  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:53.903325  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.404326  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:54.903529  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.403679  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:55.903480  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.403429  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.904252  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.403496  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.904315  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:56.058614  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.556085  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:00.556460  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:56.521589  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.021710  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:57.066266  185546 api_server.go:72] duration metric: took 1.045608096s to wait for apiserver process to appear ...
	I1028 12:16:57.066305  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:16:57.066326  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:16:57.066862  185546 api_server.go:269] stopped: https://192.168.72.156:8443/healthz: Get "https://192.168.72.156:8443/healthz": dial tcp 192.168.72.156:8443: connect: connection refused
	I1028 12:16:57.567124  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.159147  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.159179  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.159193  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.171505  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1028 12:17:00.171530  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1028 12:17:00.566560  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:00.570920  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:00.570947  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.066537  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.071173  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.071205  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:01.566517  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:01.577822  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1028 12:17:01.577851  185546 api_server.go:103] status: https://192.168.72.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1028 12:17:02.066514  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:17:02.071117  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:17:02.078265  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:17:02.078293  185546 api_server.go:131] duration metric: took 5.011981306s to wait for apiserver health ...
	I1028 12:17:02.078302  185546 cni.go:84] Creating CNI manager for ""
	I1028 12:17:02.078308  185546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:17:02.080348  185546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
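The healthz probes above progress from "connection refused" through 403 (anonymous user) and 500 (post-start hooks still settling) to a final 200, at which point the control-plane version is read. A minimal Go sketch of that poll loop, assuming the apiserver's self-signed serving cert is accepted via InsecureSkipVerify; this is illustrative only, not minikube's api_server.go, and `waitForHealthz` is a made-up helper:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The serving cert is signed by the cluster CA, so a default client
		// would reject it; skipping verification keeps the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.156:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}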
	I1028 12:16:59.267565  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:01.766399  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:16:58.404020  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:58.903743  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.403548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:16:59.903515  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.403423  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:00.903757  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.403620  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:01.903710  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.403932  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.903729  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:02.081626  185546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:17:02.103809  185546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:17:02.135225  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:17:02.152051  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:17:02.152102  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:17:02.152113  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1028 12:17:02.152125  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1028 12:17:02.152133  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1028 12:17:02.152146  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:17:02.152159  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1028 12:17:02.152167  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:17:02.152174  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1028 12:17:02.152183  185546 system_pods.go:74] duration metric: took 16.930389ms to wait for pod list to return data ...
	I1028 12:17:02.152192  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:17:02.157475  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:17:02.157504  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:17:02.157515  185546 node_conditions.go:105] duration metric: took 5.317861ms to run NodePressure ...
	I1028 12:17:02.157548  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1028 12:17:02.476553  185546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482764  185546 kubeadm.go:739] kubelet initialised
	I1028 12:17:02.482789  185546 kubeadm.go:740] duration metric: took 6.205425ms waiting for restarted kubelet to initialise ...
	I1028 12:17:02.482798  185546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:02.487480  185546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.495454  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495482  185546 pod_ready.go:82] duration metric: took 7.976331ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.495495  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.495505  185546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.499904  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499931  185546 pod_ready.go:82] duration metric: took 4.41555ms for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.499941  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "etcd-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.499948  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.504272  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504300  185546 pod_ready.go:82] duration metric: took 4.345522ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.504325  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-apiserver-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.504337  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.538786  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538826  185546 pod_ready.go:82] duration metric: took 34.474629ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.538841  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.538851  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:02.939462  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939490  185546 pod_ready.go:82] duration metric: took 400.627739ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:02.939502  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-proxy-6rc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:02.939511  185546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.339338  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339369  185546 pod_ready.go:82] duration metric: took 399.848996ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.339384  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "kube-scheduler-no-preload-871884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.339394  185546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:03.739585  185546 pod_ready.go:98] node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739640  185546 pod_ready.go:82] duration metric: took 400.235271ms for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:17:03.739655  185546 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-871884" hosting pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.739665  185546 pod_ready.go:39] duration metric: took 1.256859696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
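The pod_ready waits above check each system-critical pod's Ready condition, short-circuiting whenever the hosting node itself is not yet "Ready". A minimal client-go sketch of the per-pod part of that check (illustrative only; the use of client-go and the helper name `isPodReady` are assumptions, while the kubeconfig path, namespace, and pod name come from the log above):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19876-132631/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-dg2jd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}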
	I1028 12:17:03.739682  185546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:17:03.755064  185546 ops.go:34] apiserver oom_adj: -16
	I1028 12:17:03.755086  185546 kubeadm.go:597] duration metric: took 9.199108841s to restartPrimaryControlPlane
	I1028 12:17:03.755096  185546 kubeadm.go:394] duration metric: took 9.256999682s to StartCluster
	I1028 12:17:03.755111  185546 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.755175  185546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:17:03.757048  185546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:17:03.757327  185546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.156 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:17:03.757425  185546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:17:03.757535  185546 addons.go:69] Setting storage-provisioner=true in profile "no-preload-871884"
	I1028 12:17:03.757563  185546 addons.go:234] Setting addon storage-provisioner=true in "no-preload-871884"
	I1028 12:17:03.757565  185546 config.go:182] Loaded profile config "no-preload-871884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:17:03.757589  185546 addons.go:69] Setting metrics-server=true in profile "no-preload-871884"
	I1028 12:17:03.757617  185546 addons.go:234] Setting addon metrics-server=true in "no-preload-871884"
	I1028 12:17:03.757568  185546 addons.go:69] Setting default-storageclass=true in profile "no-preload-871884"
	W1028 12:17:03.757626  185546 addons.go:243] addon metrics-server should already be in state true
	I1028 12:17:03.757635  185546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-871884"
	W1028 12:17:03.757573  185546 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:17:03.757669  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.757713  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.758051  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758093  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758196  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758233  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.758231  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.758355  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.759378  185546 out.go:177] * Verifying Kubernetes components...
	I1028 12:17:03.761108  185546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:17:03.786180  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I1028 12:17:03.786344  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I1028 12:17:03.787005  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787096  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.787644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.787658  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.788034  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.789126  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.789149  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.789333  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.789366  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.790199  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.790591  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.793866  185546 addons.go:234] Setting addon default-storageclass=true in "no-preload-871884"
	W1028 12:17:03.793890  185546 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:17:03.793920  185546 host.go:66] Checking if "no-preload-871884" exists ...
	I1028 12:17:03.794332  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.794384  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.806461  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I1028 12:17:03.806960  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.807572  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I1028 12:17:03.807644  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.807835  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808074  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.808188  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.808349  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.808603  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.808624  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.808993  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.809610  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.809665  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.810531  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.812676  185546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:17:03.813307  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I1028 12:17:03.813821  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.814228  185546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:03.814248  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:17:03.814266  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.814350  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.814373  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.814848  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.815284  185546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:17:03.815323  185546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:17:03.817336  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817751  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.817776  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.817889  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.818079  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.818219  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.818357  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.830425  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1028 12:17:03.830940  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.831486  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.831507  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.831905  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.832125  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.834275  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.835260  185546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I1028 12:17:03.835687  185546 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:17:03.836180  185546 main.go:141] libmachine: Using API Version  1
	I1028 12:17:03.836200  185546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:17:03.836527  185546 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:17:03.836604  185546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:17:03.836741  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetState
	I1028 12:17:03.838273  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:17:03.838290  185546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:17:03.838306  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.838508  185546 main.go:141] libmachine: (no-preload-871884) Calling .DriverName
	I1028 12:17:03.839044  185546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:03.839060  185546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:17:03.839080  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHHostname
	I1028 12:17:03.842836  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843272  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.843291  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843461  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.843598  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.843767  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.843774  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.843909  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.844312  185546 main.go:141] libmachine: (no-preload-871884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:ce:7e", ip: ""} in network mk-no-preload-871884: {Iface:virbr4 ExpiryTime:2024-10-28 13:16:28 +0000 UTC Type:0 Mac:52:54:00:d0:ce:7e Iaid: IPaddr:192.168.72.156 Prefix:24 Hostname:no-preload-871884 Clientid:01:52:54:00:d0:ce:7e}
	I1028 12:17:03.844330  185546 main.go:141] libmachine: (no-preload-871884) DBG | domain no-preload-871884 has defined IP address 192.168.72.156 and MAC address 52:54:00:d0:ce:7e in network mk-no-preload-871884
	I1028 12:17:03.845228  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHPort
	I1028 12:17:03.845354  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHKeyPath
	I1028 12:17:03.845474  185546 main.go:141] libmachine: (no-preload-871884) Calling .GetSSHUsername
	I1028 12:17:03.845623  185546 sshutil.go:53] new ssh client: &{IP:192.168.72.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/no-preload-871884/id_rsa Username:docker}
	I1028 12:17:03.981979  185546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:17:04.003932  185546 node_ready.go:35] waiting up to 6m0s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:04.071389  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:17:04.169654  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:17:04.186781  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:17:04.186808  185546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:17:04.252889  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:17:04.252921  185546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:17:04.315140  185546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.315166  185546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:17:04.395995  185546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:17:04.489084  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489122  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489426  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.489445  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489470  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.489481  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.489490  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.489763  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.489781  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:04.497272  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:04.497297  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:04.497647  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:04.497677  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:04.497702  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185405  185546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.015712456s)
	I1028 12:17:05.185458  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185469  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.185749  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.185768  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.185778  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.185786  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.186142  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.186160  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.186149  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.294924  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.294953  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295282  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295301  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295319  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295329  185546 main.go:141] libmachine: Making call to close driver server
	I1028 12:17:05.295339  185546 main.go:141] libmachine: (no-preload-871884) Calling .Close
	I1028 12:17:05.295584  185546 main.go:141] libmachine: (no-preload-871884) DBG | Closing plugin on server side
	I1028 12:17:05.295615  185546 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:17:05.295622  185546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:17:05.295641  185546 addons.go:475] Verifying addon metrics-server=true in "no-preload-871884"
	I1028 12:17:05.297689  185546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1028 12:17:02.557465  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:04.557517  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:05.298945  185546 addons.go:510] duration metric: took 1.541528913s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1028 12:17:06.008731  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:03.766439  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:06.267839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:03.403696  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:03.904015  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:03.904157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:03.952859  186170 cri.go:89] found id: ""
	I1028 12:17:03.952891  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.952903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:03.952911  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:03.952972  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:03.991366  186170 cri.go:89] found id: ""
	I1028 12:17:03.991395  186170 logs.go:282] 0 containers: []
	W1028 12:17:03.991406  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:03.991414  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:03.991472  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:04.030462  186170 cri.go:89] found id: ""
	I1028 12:17:04.030494  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.030505  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:04.030513  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:04.030577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:04.066765  186170 cri.go:89] found id: ""
	I1028 12:17:04.066797  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.066808  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:04.066829  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:04.066890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:04.113262  186170 cri.go:89] found id: ""
	I1028 12:17:04.113291  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.113321  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:04.113329  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:04.113397  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:04.162767  186170 cri.go:89] found id: ""
	I1028 12:17:04.162804  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.162816  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:04.162832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:04.162906  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:04.209735  186170 cri.go:89] found id: ""
	I1028 12:17:04.209768  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.209780  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:04.209788  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:04.209853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:04.258945  186170 cri.go:89] found id: ""
	I1028 12:17:04.258981  186170 logs.go:282] 0 containers: []
	W1028 12:17:04.258993  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:04.259004  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:04.259031  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:04.314152  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:04.314191  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:04.330109  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:04.330154  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:04.495068  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:04.495096  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:04.495111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:04.576574  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:04.576612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
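Each probe cycle above runs "sudo crictl ps -a --quiet --name=<component>" for every control-plane component and reports "0 containers" / No container was found whenever the command prints nothing. The following is a minimal, hypothetical Go sketch of that check only; the helper name and the local exec call are illustrative (minikube itself drives the command on the guest over SSH via ssh_runner.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the crictl probe seen in the log above:
// empty output means no container matched the given name filter.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("found %d container(s) for %q\n", len(ids), c)
		}
	}
}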
	I1028 12:17:07.129008  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:07.149770  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:07.149835  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:07.200603  186170 cri.go:89] found id: ""
	I1028 12:17:07.200636  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.200648  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:07.200656  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:07.200733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:07.242681  186170 cri.go:89] found id: ""
	I1028 12:17:07.242709  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.242717  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:07.242723  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:07.242770  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:07.286826  186170 cri.go:89] found id: ""
	I1028 12:17:07.286860  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.286873  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:07.286881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:07.286943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:07.327730  186170 cri.go:89] found id: ""
	I1028 12:17:07.327765  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.327777  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:07.327787  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:07.327855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:07.369138  186170 cri.go:89] found id: ""
	I1028 12:17:07.369167  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.369178  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:07.369187  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:07.369257  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:07.411640  186170 cri.go:89] found id: ""
	I1028 12:17:07.411678  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.411690  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:07.411697  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:07.411758  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:07.454066  186170 cri.go:89] found id: ""
	I1028 12:17:07.454099  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.454109  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:07.454119  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:07.454180  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:07.489981  186170 cri.go:89] found id: ""
	I1028 12:17:07.490011  186170 logs.go:282] 0 containers: []
	W1028 12:17:07.490020  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:07.490030  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:07.490044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:07.559890  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:07.559916  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:07.559927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:07.641601  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:07.641647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:07.687694  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:07.687732  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:07.739346  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:07.739389  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:06.558978  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:09.058557  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:08.507261  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:10.508790  185546 node_ready.go:53] node "no-preload-871884" has status "Ready":"False"
	I1028 12:17:11.007666  185546 node_ready.go:49] node "no-preload-871884" has status "Ready":"True"
	I1028 12:17:11.007698  185546 node_ready.go:38] duration metric: took 7.003728813s for node "no-preload-871884" to be "Ready" ...
	I1028 12:17:11.007710  185546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:17:11.014677  185546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020020  185546 pod_ready.go:93] pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:11.020042  185546 pod_ready.go:82] duration metric: took 5.339994ms for pod "coredns-7c65d6cfc9-dg2jd" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:11.020053  185546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:08.765053  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.766104  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:10.262069  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:10.277467  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:10.277566  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:10.320331  186170 cri.go:89] found id: ""
	I1028 12:17:10.320366  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.320378  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:10.320387  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:10.320455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:10.357204  186170 cri.go:89] found id: ""
	I1028 12:17:10.357235  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.357252  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:10.357261  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:10.357324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:10.392480  186170 cri.go:89] found id: ""
	I1028 12:17:10.392510  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.392519  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:10.392526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:10.392574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:10.430084  186170 cri.go:89] found id: ""
	I1028 12:17:10.430120  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.430132  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:10.430140  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:10.430207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:10.479689  186170 cri.go:89] found id: ""
	I1028 12:17:10.479717  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.479724  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:10.479730  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:10.479786  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:10.520871  186170 cri.go:89] found id: ""
	I1028 12:17:10.520902  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.520912  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:10.520920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:10.520978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:10.559121  186170 cri.go:89] found id: ""
	I1028 12:17:10.559154  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.559167  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:10.559176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:10.559254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:10.596552  186170 cri.go:89] found id: ""
	I1028 12:17:10.596583  186170 logs.go:282] 0 containers: []
	W1028 12:17:10.596594  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:10.596603  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:10.596615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:10.673014  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:10.673037  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:10.673055  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:10.762942  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:10.762982  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:10.805866  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:10.805901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:10.858861  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:10.858895  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:11.556955  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.560411  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.027402  185546 pod_ready.go:103] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:14.026501  185546 pod_ready.go:93] pod "etcd-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.026537  185546 pod_ready.go:82] duration metric: took 3.006475793s for pod "etcd-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.026552  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036355  185546 pod_ready.go:93] pod "kube-apiserver-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.036379  185546 pod_ready.go:82] duration metric: took 9.819102ms for pod "kube-apiserver-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.036391  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042711  185546 pod_ready.go:93] pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.042734  185546 pod_ready.go:82] duration metric: took 6.336523ms for pod "kube-controller-manager-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.042745  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047387  185546 pod_ready.go:93] pod "kube-proxy-6rc4l" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.047409  185546 pod_ready.go:82] duration metric: took 4.657388ms for pod "kube-proxy-6rc4l" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.047422  185546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208217  185546 pod_ready.go:93] pod "kube-scheduler-no-preload-871884" in "kube-system" namespace has status "Ready":"True"
	I1028 12:17:14.208243  185546 pod_ready.go:82] duration metric: took 160.813834ms for pod "kube-scheduler-no-preload-871884" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:14.208254  185546 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	I1028 12:17:16.214834  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
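The pod_ready.go lines above poll each system-critical pod until its PodReady condition reports True, bounded by the 6m0s wait noted in the log. Below is a minimal, hypothetical client-go sketch of that readiness check; the pod name, namespace, and kubeconfig path are taken from the log, while the polling loop itself is only an illustration, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns true when the PodReady condition is True, which is what
// the `has status "Ready":"True"` lines above correspond to.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors "waiting up to 6m0s" in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-871884", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}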
	I1028 12:17:13.268493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:15.271377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:13.373936  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:13.387904  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:13.387969  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:13.435502  186170 cri.go:89] found id: ""
	I1028 12:17:13.435528  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.435536  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:13.435547  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:13.435593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:13.475592  186170 cri.go:89] found id: ""
	I1028 12:17:13.475621  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.475631  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:13.475639  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:13.475703  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:13.524964  186170 cri.go:89] found id: ""
	I1028 12:17:13.524993  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.525002  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:13.525010  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:13.525071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:13.570408  186170 cri.go:89] found id: ""
	I1028 12:17:13.570437  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.570446  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:13.570455  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:13.570515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:13.620981  186170 cri.go:89] found id: ""
	I1028 12:17:13.621008  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.621016  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:13.621022  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:13.621071  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:13.657345  186170 cri.go:89] found id: ""
	I1028 12:17:13.657375  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.657385  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:13.657393  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:13.657455  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:13.695975  186170 cri.go:89] found id: ""
	I1028 12:17:13.695998  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.696005  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:13.696012  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:13.696059  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:13.744055  186170 cri.go:89] found id: ""
	I1028 12:17:13.744093  186170 logs.go:282] 0 containers: []
	W1028 12:17:13.744112  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:13.744128  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:13.744143  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:13.798898  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:13.798936  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:13.813630  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:13.813676  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:13.886699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:13.886733  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:13.886750  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:13.972377  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:13.972419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.518525  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:16.532512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:16.532594  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:16.573345  186170 cri.go:89] found id: ""
	I1028 12:17:16.573370  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.573377  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:16.573384  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:16.573449  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:16.611130  186170 cri.go:89] found id: ""
	I1028 12:17:16.611159  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.611170  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:16.611179  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:16.611242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:16.646155  186170 cri.go:89] found id: ""
	I1028 12:17:16.646180  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.646187  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:16.646194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:16.646253  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:16.680731  186170 cri.go:89] found id: ""
	I1028 12:17:16.680761  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.680770  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:16.680776  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:16.680836  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:16.725323  186170 cri.go:89] found id: ""
	I1028 12:17:16.725351  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.725361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:16.725370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:16.725429  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:16.761810  186170 cri.go:89] found id: ""
	I1028 12:17:16.761839  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.761850  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:16.761859  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:16.761919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:16.797737  186170 cri.go:89] found id: ""
	I1028 12:17:16.797771  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.797783  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:16.797791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:16.797854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:16.834045  186170 cri.go:89] found id: ""
	I1028 12:17:16.834077  186170 logs.go:282] 0 containers: []
	W1028 12:17:16.834087  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:16.834098  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:16.834111  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:16.885174  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:16.885211  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:16.900281  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:16.900312  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:16.973761  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:16.973784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:16.973799  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:17.058711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:17.058747  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:16.056296  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.557898  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:18.215767  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:20.219613  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:17.764493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.766909  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:21.769560  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:19.605867  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:19.620832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:19.620896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:19.660722  186170 cri.go:89] found id: ""
	I1028 12:17:19.660747  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.660757  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:19.660765  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:19.660825  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:19.698537  186170 cri.go:89] found id: ""
	I1028 12:17:19.698571  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.698581  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:19.698590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:19.698639  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:19.736911  186170 cri.go:89] found id: ""
	I1028 12:17:19.736945  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.736956  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:19.736972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:19.737041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:19.779343  186170 cri.go:89] found id: ""
	I1028 12:17:19.779371  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.779379  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:19.779384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:19.779432  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:19.824749  186170 cri.go:89] found id: ""
	I1028 12:17:19.824778  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.824788  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:19.824796  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:19.824861  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:19.862810  186170 cri.go:89] found id: ""
	I1028 12:17:19.862850  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.862862  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:19.862871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:19.862935  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:19.910552  186170 cri.go:89] found id: ""
	I1028 12:17:19.910583  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.910592  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:19.910601  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:19.910663  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:19.956806  186170 cri.go:89] found id: ""
	I1028 12:17:19.956838  186170 logs.go:282] 0 containers: []
	W1028 12:17:19.956850  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:19.956862  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:19.956879  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:20.018142  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:20.018187  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:20.035656  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:20.035696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:20.112484  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:20.112515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:20.112535  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:20.203034  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:20.203079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:22.749198  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:22.762993  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:22.763073  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:22.808879  186170 cri.go:89] found id: ""
	I1028 12:17:22.808923  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.808934  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:22.808943  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:22.809013  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:22.845367  186170 cri.go:89] found id: ""
	I1028 12:17:22.845393  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.845401  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:22.845407  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:22.845457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:22.884841  186170 cri.go:89] found id: ""
	I1028 12:17:22.884870  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.884877  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:22.884884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:22.884936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:22.921830  186170 cri.go:89] found id: ""
	I1028 12:17:22.921857  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.921865  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:22.921871  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:22.921917  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:22.958981  186170 cri.go:89] found id: ""
	I1028 12:17:22.959016  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.959028  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:22.959038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:22.959138  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:22.993987  186170 cri.go:89] found id: ""
	I1028 12:17:22.994022  186170 logs.go:282] 0 containers: []
	W1028 12:17:22.994033  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:22.994041  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:22.994112  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:23.036235  186170 cri.go:89] found id: ""
	I1028 12:17:23.036262  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.036270  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:23.036276  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:23.036326  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:23.084209  186170 cri.go:89] found id: ""
	I1028 12:17:23.084237  186170 logs.go:282] 0 containers: []
	W1028 12:17:23.084248  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:23.084260  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:23.084274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:23.168684  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:23.168725  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:23.211205  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:23.211246  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:23.269140  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:23.269174  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:23.283588  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:23.283620  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:17:21.057114  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:23.058470  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:25.556210  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:22.714692  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.717301  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:24.269572  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:26.765467  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:17:23.363349  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:25.864503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:25.881420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:25.881505  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:25.920194  186170 cri.go:89] found id: ""
	I1028 12:17:25.920230  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.920242  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:25.920250  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:25.920319  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:25.982898  186170 cri.go:89] found id: ""
	I1028 12:17:25.982940  186170 logs.go:282] 0 containers: []
	W1028 12:17:25.982952  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:25.982960  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:25.983026  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:26.042807  186170 cri.go:89] found id: ""
	I1028 12:17:26.042848  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.042856  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:26.042863  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:26.042914  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:26.081683  186170 cri.go:89] found id: ""
	I1028 12:17:26.081717  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.081729  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:26.081738  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:26.081811  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:26.118390  186170 cri.go:89] found id: ""
	I1028 12:17:26.118419  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.118426  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:26.118433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:26.118482  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:26.154065  186170 cri.go:89] found id: ""
	I1028 12:17:26.154100  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.154108  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:26.154114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:26.154168  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:26.195602  186170 cri.go:89] found id: ""
	I1028 12:17:26.195634  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.195645  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:26.195656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:26.195711  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:26.237315  186170 cri.go:89] found id: ""
	I1028 12:17:26.237350  186170 logs.go:282] 0 containers: []
	W1028 12:17:26.237361  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:26.237371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:26.237383  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:26.319079  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:26.319121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:26.360967  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:26.360996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:26.414689  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:26.414728  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:26.429733  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:26.429763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:26.503297  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
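The recurring "connection to the server localhost:8443 was refused" failures above mean nothing is listening on the apiserver port inside the guest yet, which is why every "describe nodes" attempt exits with status 1. A minimal, hypothetical probe that distinguishes "connection refused" from "apiserver is listening" is sketched below; it assumes it is run on the guest (for example via minikube ssh), and it only checks reachability, not authentication:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Skip certificate verification: the goal is only to tell a refused
	// connection apart from any HTTP(S) response from the apiserver.
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered with:", resp.Status)
}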
	I1028 12:17:28.056563  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:30.556711  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:27.215356  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.216505  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.267239  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.765267  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:29.003479  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:29.017833  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:29.017908  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:29.067759  186170 cri.go:89] found id: ""
	I1028 12:17:29.067785  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.067793  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:29.067799  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:29.067856  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:29.114369  186170 cri.go:89] found id: ""
	I1028 12:17:29.114401  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.114411  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:29.114419  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:29.114511  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:29.154640  186170 cri.go:89] found id: ""
	I1028 12:17:29.154672  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.154683  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:29.154692  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:29.154749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:29.194296  186170 cri.go:89] found id: ""
	I1028 12:17:29.194331  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.194341  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:29.194349  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:29.194413  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:29.239107  186170 cri.go:89] found id: ""
	I1028 12:17:29.239133  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.239146  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:29.239152  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:29.239199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:29.274900  186170 cri.go:89] found id: ""
	I1028 12:17:29.274928  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.274937  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:29.274946  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:29.275010  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:29.310307  186170 cri.go:89] found id: ""
	I1028 12:17:29.310336  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.310346  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:29.310354  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:29.310421  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:29.345285  186170 cri.go:89] found id: ""
	I1028 12:17:29.345313  186170 logs.go:282] 0 containers: []
	W1028 12:17:29.345351  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:29.345363  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:29.345379  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:29.402044  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:29.402094  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:29.417578  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:29.417615  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:29.497733  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:29.497757  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:29.497773  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:29.587148  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:29.587202  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:32.132697  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:32.146675  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:32.146746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:32.188640  186170 cri.go:89] found id: ""
	I1028 12:17:32.188669  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.188681  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:32.188690  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:32.188749  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:32.228690  186170 cri.go:89] found id: ""
	I1028 12:17:32.228726  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.228738  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:32.228745  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:32.228812  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:32.269133  186170 cri.go:89] found id: ""
	I1028 12:17:32.269180  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.269191  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:32.269200  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:32.269279  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:32.319757  186170 cri.go:89] found id: ""
	I1028 12:17:32.319796  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.319809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:32.319817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:32.319888  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:32.360072  186170 cri.go:89] found id: ""
	I1028 12:17:32.360104  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.360116  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:32.360125  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:32.360192  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:32.413256  186170 cri.go:89] found id: ""
	I1028 12:17:32.413286  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.413297  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:32.413319  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:32.413371  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:32.454505  186170 cri.go:89] found id: ""
	I1028 12:17:32.454536  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.454547  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:32.454555  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:32.454621  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:32.495091  186170 cri.go:89] found id: ""
	I1028 12:17:32.495129  186170 logs.go:282] 0 containers: []
	W1028 12:17:32.495138  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:32.495148  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:32.495163  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:32.548669  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:32.548712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:32.566003  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:32.566044  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:32.642079  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:32.642104  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:32.642117  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:32.727317  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:32.727361  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:33.055776  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.056525  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:31.714959  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:33.715292  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.715824  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:34.267155  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:36.765199  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:35.278752  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:35.292256  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:35.292344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:35.328420  186170 cri.go:89] found id: ""
	I1028 12:17:35.328447  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.328457  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:35.328465  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:35.328528  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:35.365120  186170 cri.go:89] found id: ""
	I1028 12:17:35.365153  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.365162  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:35.365170  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:35.365236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:35.402057  186170 cri.go:89] found id: ""
	I1028 12:17:35.402093  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.402105  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:35.402114  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:35.402179  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:35.436496  186170 cri.go:89] found id: ""
	I1028 12:17:35.436523  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.436531  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:35.436536  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:35.436593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:35.473369  186170 cri.go:89] found id: ""
	I1028 12:17:35.473399  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.473409  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:35.473416  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:35.473480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:35.511258  186170 cri.go:89] found id: ""
	I1028 12:17:35.511293  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.511305  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:35.511337  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:35.511403  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:35.548430  186170 cri.go:89] found id: ""
	I1028 12:17:35.548461  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.548472  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:35.548479  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:35.548526  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:35.584324  186170 cri.go:89] found id: ""
	I1028 12:17:35.584357  186170 logs.go:282] 0 containers: []
	W1028 12:17:35.584369  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:35.584379  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:35.584394  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:35.598813  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:35.598855  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:35.676911  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:35.676935  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:35.676948  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:35.757166  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:35.757205  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:35.801381  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:35.801411  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:37.557428  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.057039  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:37.715996  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:40.213916  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.765841  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:41.267477  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:38.356346  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:38.370346  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:38.370436  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:38.413623  186170 cri.go:89] found id: ""
	I1028 12:17:38.413653  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.413664  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:38.413671  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:38.413741  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:38.450656  186170 cri.go:89] found id: ""
	I1028 12:17:38.450682  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.450691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:38.450697  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:38.450754  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:38.491050  186170 cri.go:89] found id: ""
	I1028 12:17:38.491083  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.491090  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:38.491096  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:38.491146  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:38.529708  186170 cri.go:89] found id: ""
	I1028 12:17:38.529735  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.529743  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:38.529749  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:38.529808  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:38.566632  186170 cri.go:89] found id: ""
	I1028 12:17:38.566659  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.566673  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:38.566681  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:38.566746  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:38.602323  186170 cri.go:89] found id: ""
	I1028 12:17:38.602362  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.602374  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:38.602382  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:38.602444  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:38.646462  186170 cri.go:89] found id: ""
	I1028 12:17:38.646487  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.646494  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:38.646499  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:38.646560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:38.681803  186170 cri.go:89] found id: ""
	I1028 12:17:38.681830  186170 logs.go:282] 0 containers: []
	W1028 12:17:38.681837  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:38.681847  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:38.681858  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:38.697360  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:38.697387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:38.769502  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:38.769549  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:38.769566  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:38.852029  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:38.852068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:38.895585  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:38.895621  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.450844  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:41.464665  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:41.464731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:41.507199  186170 cri.go:89] found id: ""
	I1028 12:17:41.507265  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.507274  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:41.507280  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:41.507351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:41.550126  186170 cri.go:89] found id: ""
	I1028 12:17:41.550158  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.550168  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:41.550176  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:41.550237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:41.588914  186170 cri.go:89] found id: ""
	I1028 12:17:41.588942  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.588953  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:41.588961  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:41.589027  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:41.625255  186170 cri.go:89] found id: ""
	I1028 12:17:41.625285  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.625297  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:41.625315  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:41.625386  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:41.663786  186170 cri.go:89] found id: ""
	I1028 12:17:41.663816  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.663833  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:41.663844  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:41.663911  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:41.698330  186170 cri.go:89] found id: ""
	I1028 12:17:41.698357  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.698364  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:41.698371  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:41.698424  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:41.734658  186170 cri.go:89] found id: ""
	I1028 12:17:41.734688  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.734699  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:41.734707  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:41.734776  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:41.773227  186170 cri.go:89] found id: ""
	I1028 12:17:41.773262  186170 logs.go:282] 0 containers: []
	W1028 12:17:41.773273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:41.773286  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:41.773301  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:41.815830  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:41.815866  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:41.866789  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:41.866832  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:41.882088  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:41.882121  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:41.953895  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:41.953917  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:41.953933  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:42.556504  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.557351  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:42.216159  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.216286  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:43.764776  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.265654  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:44.538655  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:44.551644  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:44.551724  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:44.589370  186170 cri.go:89] found id: ""
	I1028 12:17:44.589400  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.589407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:44.589413  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:44.589473  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:44.625143  186170 cri.go:89] found id: ""
	I1028 12:17:44.625175  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.625185  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:44.625198  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:44.625283  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:44.664579  186170 cri.go:89] found id: ""
	I1028 12:17:44.664609  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.664620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:44.664628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:44.664692  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:44.700009  186170 cri.go:89] found id: ""
	I1028 12:17:44.700038  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.700046  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:44.700053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:44.700119  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:44.736283  186170 cri.go:89] found id: ""
	I1028 12:17:44.736316  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.736323  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:44.736331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:44.736393  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:44.772214  186170 cri.go:89] found id: ""
	I1028 12:17:44.772249  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.772261  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:44.772270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:44.772324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:44.808152  186170 cri.go:89] found id: ""
	I1028 12:17:44.808187  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.808198  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:44.808206  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:44.808276  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:44.844208  186170 cri.go:89] found id: ""
	I1028 12:17:44.844238  186170 logs.go:282] 0 containers: []
	W1028 12:17:44.844251  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:44.844264  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:44.844286  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:44.925988  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:44.926029  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:44.964936  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:44.964969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:45.015630  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:45.015675  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:45.030537  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:45.030571  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:45.103861  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:47.604548  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:47.618858  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:47.618941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:47.663237  186170 cri.go:89] found id: ""
	I1028 12:17:47.663267  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.663278  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:47.663285  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:47.663350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:47.703207  186170 cri.go:89] found id: ""
	I1028 12:17:47.703236  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.703244  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:47.703250  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:47.703322  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:47.743050  186170 cri.go:89] found id: ""
	I1028 12:17:47.743081  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.743091  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:47.743099  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:47.743161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:47.789956  186170 cri.go:89] found id: ""
	I1028 12:17:47.789982  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.789989  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:47.789996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:47.790055  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:47.833134  186170 cri.go:89] found id: ""
	I1028 12:17:47.833165  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.833177  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:47.833184  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:47.833241  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:47.870881  186170 cri.go:89] found id: ""
	I1028 12:17:47.870905  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.870916  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:47.870925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:47.870992  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:47.908121  186170 cri.go:89] found id: ""
	I1028 12:17:47.908155  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.908165  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:47.908173  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:47.908236  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:47.946835  186170 cri.go:89] found id: ""
	I1028 12:17:47.946871  186170 logs.go:282] 0 containers: []
	W1028 12:17:47.946884  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:47.946896  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:47.946914  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:47.999276  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:47.999316  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:48.016268  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:48.016306  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:48.099928  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:48.099959  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:48.099976  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:48.180885  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:48.180937  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:46.565643  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.057078  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:46.716667  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:49.216308  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:48.267160  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.764737  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:50.727685  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:50.741737  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:50.741820  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:50.782030  186170 cri.go:89] found id: ""
	I1028 12:17:50.782060  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.782081  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:50.782090  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:50.782157  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:50.817423  186170 cri.go:89] found id: ""
	I1028 12:17:50.817453  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.817464  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:50.817471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:50.817523  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:50.857203  186170 cri.go:89] found id: ""
	I1028 12:17:50.857232  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.857242  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:50.857249  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:50.857324  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:50.894196  186170 cri.go:89] found id: ""
	I1028 12:17:50.894236  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.894248  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:50.894259  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:50.894325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:50.930014  186170 cri.go:89] found id: ""
	I1028 12:17:50.930046  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.930056  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:50.930064  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:50.930128  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:50.967742  186170 cri.go:89] found id: ""
	I1028 12:17:50.967774  186170 logs.go:282] 0 containers: []
	W1028 12:17:50.967785  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:50.967799  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:50.967857  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:51.013232  186170 cri.go:89] found id: ""
	I1028 12:17:51.013258  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.013269  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:51.013281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:51.013341  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:51.052871  186170 cri.go:89] found id: ""
	I1028 12:17:51.052900  186170 logs.go:282] 0 containers: []
	W1028 12:17:51.052912  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:51.052923  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:51.052943  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:51.106536  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:51.106579  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:51.121628  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:51.121670  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:51.200215  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:51.200249  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:51.200266  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:51.291948  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:51.291996  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:51.058399  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.556450  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:55.557043  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:51.715736  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.215689  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:52.764839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:54.766020  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:57.269346  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:53.837066  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:53.851660  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:53.851747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:53.888799  186170 cri.go:89] found id: ""
	I1028 12:17:53.888835  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.888846  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:53.888855  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:53.888919  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:53.923838  186170 cri.go:89] found id: ""
	I1028 12:17:53.923867  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.923875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:53.923880  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:53.923940  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:53.960264  186170 cri.go:89] found id: ""
	I1028 12:17:53.960293  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.960302  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:53.960307  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:53.960356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:53.995913  186170 cri.go:89] found id: ""
	I1028 12:17:53.995943  186170 logs.go:282] 0 containers: []
	W1028 12:17:53.995952  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:53.995958  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:53.996009  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:54.032127  186170 cri.go:89] found id: ""
	I1028 12:17:54.032155  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.032163  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:54.032169  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:54.032219  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:54.070230  186170 cri.go:89] found id: ""
	I1028 12:17:54.070267  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.070279  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:54.070288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:54.070346  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:54.104992  186170 cri.go:89] found id: ""
	I1028 12:17:54.105024  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.105032  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:54.105038  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:54.105099  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:54.140071  186170 cri.go:89] found id: ""
	I1028 12:17:54.140102  186170 logs.go:282] 0 containers: []
	W1028 12:17:54.140113  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:54.140124  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:54.140137  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:54.195304  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:54.195353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:54.210315  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:54.210355  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:54.301247  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:54.301279  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:54.301300  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:54.382818  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:54.382876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:56.928740  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:17:56.942264  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:17:56.942334  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:17:56.979445  186170 cri.go:89] found id: ""
	I1028 12:17:56.979494  186170 logs.go:282] 0 containers: []
	W1028 12:17:56.979503  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:17:56.979510  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:17:56.979580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:17:57.017777  186170 cri.go:89] found id: ""
	I1028 12:17:57.017817  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.017831  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:17:57.017840  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:17:57.017954  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:17:57.058842  186170 cri.go:89] found id: ""
	I1028 12:17:57.058873  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.058881  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:17:57.058887  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:17:57.058941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:17:57.096365  186170 cri.go:89] found id: ""
	I1028 12:17:57.096393  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.096401  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:17:57.096408  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:17:57.096456  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:17:57.135395  186170 cri.go:89] found id: ""
	I1028 12:17:57.135425  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.135433  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:17:57.135440  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:17:57.135502  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:17:57.173426  186170 cri.go:89] found id: ""
	I1028 12:17:57.173455  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.173466  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:17:57.173473  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:17:57.173536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:17:57.209969  186170 cri.go:89] found id: ""
	I1028 12:17:57.210004  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.210015  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:17:57.210026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:17:57.210118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:17:57.252141  186170 cri.go:89] found id: ""
	I1028 12:17:57.252172  186170 logs.go:282] 0 containers: []
	W1028 12:17:57.252182  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:17:57.252192  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:17:57.252206  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:17:57.304533  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:17:57.304576  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:17:57.319775  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:17:57.319807  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:17:57.385156  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:17:57.385186  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:17:57.385198  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:17:57.464777  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:17:57.464818  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:17:57.557519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.057963  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:56.715168  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:58.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.215445  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:17:59.271418  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:01.766158  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:00.005073  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:00.033478  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:00.033580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:00.071437  186170 cri.go:89] found id: ""
	I1028 12:18:00.071462  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.071470  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:00.071475  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:00.071524  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:00.108147  186170 cri.go:89] found id: ""
	I1028 12:18:00.108183  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.108195  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:00.108204  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:00.108262  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:00.146129  186170 cri.go:89] found id: ""
	I1028 12:18:00.146157  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.146168  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:00.146176  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:00.146237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:00.184211  186170 cri.go:89] found id: ""
	I1028 12:18:00.184239  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.184254  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:00.184262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:00.184325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:00.221949  186170 cri.go:89] found id: ""
	I1028 12:18:00.221980  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.221988  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:00.221995  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:00.222049  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:00.264173  186170 cri.go:89] found id: ""
	I1028 12:18:00.264203  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.264213  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:00.264230  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:00.264287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:00.302024  186170 cri.go:89] found id: ""
	I1028 12:18:00.302048  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.302057  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:00.302065  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:00.302134  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:00.340500  186170 cri.go:89] found id: ""
	I1028 12:18:00.340529  186170 logs.go:282] 0 containers: []
	W1028 12:18:00.340542  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:00.340553  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:00.340574  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:00.392375  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:00.392419  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:00.409823  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:00.409854  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:00.489965  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:00.489988  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:00.490000  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:00.574510  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:00.574553  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.116821  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:03.131120  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:03.131188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:03.168283  186170 cri.go:89] found id: ""
	I1028 12:18:03.168320  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.168331  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:03.168340  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:03.168404  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:03.210877  186170 cri.go:89] found id: ""
	I1028 12:18:03.210902  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.210910  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:03.210922  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:03.210981  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:03.248316  186170 cri.go:89] found id: ""
	I1028 12:18:03.248351  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.248362  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:03.248370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:03.248437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:03.287624  186170 cri.go:89] found id: ""
	I1028 12:18:03.287653  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.287663  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:03.287674  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:03.287738  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:02.556743  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.055348  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.217504  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:05.715462  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.768899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:06.266111  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:03.323235  186170 cri.go:89] found id: ""
	I1028 12:18:03.323268  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.323281  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:03.323289  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:03.323350  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:03.359449  186170 cri.go:89] found id: ""
	I1028 12:18:03.359481  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.359489  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:03.359496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:03.359544  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:03.397656  186170 cri.go:89] found id: ""
	I1028 12:18:03.397682  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.397690  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:03.397696  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:03.397756  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:03.436269  186170 cri.go:89] found id: ""
	I1028 12:18:03.436312  186170 logs.go:282] 0 containers: []
	W1028 12:18:03.436325  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:03.436337  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:03.436353  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:03.484677  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:03.484721  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:03.538826  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:03.538867  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:03.554032  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:03.554067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:03.630222  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:03.630256  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:03.630274  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.208709  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:06.223650  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:06.223731  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:06.264302  186170 cri.go:89] found id: ""
	I1028 12:18:06.264339  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.264348  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:06.264356  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:06.264415  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:06.306168  186170 cri.go:89] found id: ""
	I1028 12:18:06.306204  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.306212  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:06.306218  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:06.306306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:06.344883  186170 cri.go:89] found id: ""
	I1028 12:18:06.344909  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.344920  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:06.344927  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:06.344978  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:06.382601  186170 cri.go:89] found id: ""
	I1028 12:18:06.382630  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.382640  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:06.382648  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:06.382720  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:06.428844  186170 cri.go:89] found id: ""
	I1028 12:18:06.428871  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.428878  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:06.428884  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:06.428936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:06.480468  186170 cri.go:89] found id: ""
	I1028 12:18:06.480497  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.480508  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:06.480516  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:06.480581  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:06.525838  186170 cri.go:89] found id: ""
	I1028 12:18:06.525869  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.525882  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:06.525890  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:06.525950  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:06.572122  186170 cri.go:89] found id: ""
	I1028 12:18:06.572147  186170 logs.go:282] 0 containers: []
	W1028 12:18:06.572154  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:06.572164  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:06.572176  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:06.642898  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:06.642925  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:06.642941  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:06.727353  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:06.727399  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:06.770170  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:06.770208  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:06.825593  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:06.825635  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:07.055842  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.057870  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:07.716593  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.215089  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:08.266990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:10.765441  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:09.340955  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:09.355706  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:09.355783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:09.390008  186170 cri.go:89] found id: ""
	I1028 12:18:09.390039  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.390050  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:09.390057  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:09.390123  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:09.428209  186170 cri.go:89] found id: ""
	I1028 12:18:09.428247  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.428259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:09.428267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:09.428327  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:09.466499  186170 cri.go:89] found id: ""
	I1028 12:18:09.466524  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.466531  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:09.466538  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:09.466596  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:09.505384  186170 cri.go:89] found id: ""
	I1028 12:18:09.505418  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.505426  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:09.505433  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:09.505492  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:09.543113  186170 cri.go:89] found id: ""
	I1028 12:18:09.543145  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.543154  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:09.543160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:09.543225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:09.581402  186170 cri.go:89] found id: ""
	I1028 12:18:09.581436  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.581446  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:09.581459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:09.581542  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:09.620586  186170 cri.go:89] found id: ""
	I1028 12:18:09.620616  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.620623  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:09.620629  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:09.620682  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:09.657220  186170 cri.go:89] found id: ""
	I1028 12:18:09.657246  186170 logs.go:282] 0 containers: []
	W1028 12:18:09.657253  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:09.657261  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:09.657272  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:09.709636  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:09.709671  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:09.724476  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:09.724510  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:09.800194  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:09.800226  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:09.800242  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:09.882217  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:09.882254  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:12.425609  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:12.443417  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:12.443480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:12.509173  186170 cri.go:89] found id: ""
	I1028 12:18:12.509202  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.509211  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:12.509217  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:12.509287  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:12.546564  186170 cri.go:89] found id: ""
	I1028 12:18:12.546595  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.546605  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:12.546612  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:12.546676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:12.584949  186170 cri.go:89] found id: ""
	I1028 12:18:12.584982  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.584990  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:12.584996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:12.585045  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:12.624513  186170 cri.go:89] found id: ""
	I1028 12:18:12.624543  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.624554  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:12.624562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:12.624624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:12.661811  186170 cri.go:89] found id: ""
	I1028 12:18:12.661854  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.661867  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:12.661876  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:12.661936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:12.700037  186170 cri.go:89] found id: ""
	I1028 12:18:12.700072  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.700080  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:12.700086  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:12.700149  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:12.740604  186170 cri.go:89] found id: ""
	I1028 12:18:12.740629  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.740637  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:12.740643  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:12.740696  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:12.779296  186170 cri.go:89] found id: ""
	I1028 12:18:12.779323  186170 logs.go:282] 0 containers: []
	W1028 12:18:12.779333  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:12.779344  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:12.779358  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:12.830286  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:12.830330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:12.845423  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:12.845449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:12.923961  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:12.924003  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:12.924018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:13.003949  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:13.003990  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:11.556422  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.056678  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.216340  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.715086  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:12.766493  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:14.766870  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.264729  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:15.552001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:15.565834  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:15.565899  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:15.598794  186170 cri.go:89] found id: ""
	I1028 12:18:15.598819  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.598828  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:15.598836  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:15.598904  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:15.637029  186170 cri.go:89] found id: ""
	I1028 12:18:15.637062  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.637073  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:15.637082  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:15.637148  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:15.675461  186170 cri.go:89] found id: ""
	I1028 12:18:15.675495  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.675503  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:15.675510  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:15.675577  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:15.709169  186170 cri.go:89] found id: ""
	I1028 12:18:15.709198  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.709210  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:15.709217  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:15.709288  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:15.747687  186170 cri.go:89] found id: ""
	I1028 12:18:15.747715  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.747725  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:15.747740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:15.747802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:15.785554  186170 cri.go:89] found id: ""
	I1028 12:18:15.785587  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.785598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:15.785607  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:15.785674  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:15.828713  186170 cri.go:89] found id: ""
	I1028 12:18:15.828749  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.828762  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:15.828771  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:15.828834  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:15.864708  186170 cri.go:89] found id: ""
	I1028 12:18:15.864745  186170 logs.go:282] 0 containers: []
	W1028 12:18:15.864757  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:15.864767  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:15.864788  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:15.941064  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:15.941090  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:15.941102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:16.031546  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:16.031586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:16.074297  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:16.074343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:16.132758  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:16.132803  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:16.057216  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.555816  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:20.556292  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:17.215803  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.215927  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:19.265178  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.268144  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:18.649877  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:18.663420  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:18.663480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:18.698967  186170 cri.go:89] found id: ""
	I1028 12:18:18.698999  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.699011  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:18.699020  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:18.699088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:18.738095  186170 cri.go:89] found id: ""
	I1028 12:18:18.738128  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.738140  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:18.738149  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:18.738231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:18.780039  186170 cri.go:89] found id: ""
	I1028 12:18:18.780066  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.780074  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:18.780080  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:18.780131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:18.820458  186170 cri.go:89] found id: ""
	I1028 12:18:18.820492  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.820501  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:18.820512  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:18.820569  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:18.860856  186170 cri.go:89] found id: ""
	I1028 12:18:18.860887  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.860896  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:18.860903  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:18.860965  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:18.900435  186170 cri.go:89] found id: ""
	I1028 12:18:18.900467  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.900478  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:18.900486  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:18.900547  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:18.938468  186170 cri.go:89] found id: ""
	I1028 12:18:18.938499  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.938508  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:18.938515  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:18.938570  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:18.975389  186170 cri.go:89] found id: ""
	I1028 12:18:18.975429  186170 logs.go:282] 0 containers: []
	W1028 12:18:18.975440  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:18.975451  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:18.975466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:19.028306  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:19.028354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:19.043348  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:19.043382  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:19.117653  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:19.117721  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:19.117737  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:19.204218  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:19.204256  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:21.749564  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:21.768060  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:21.768131  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:21.805414  186170 cri.go:89] found id: ""
	I1028 12:18:21.805443  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.805454  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:21.805462  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:21.805541  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:21.842649  186170 cri.go:89] found id: ""
	I1028 12:18:21.842681  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.842691  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:21.842699  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:21.842767  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:21.883241  186170 cri.go:89] found id: ""
	I1028 12:18:21.883269  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.883279  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:21.883288  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:21.883351  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:21.926358  186170 cri.go:89] found id: ""
	I1028 12:18:21.926386  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.926394  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:21.926401  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:21.926453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:21.964671  186170 cri.go:89] found id: ""
	I1028 12:18:21.964705  186170 logs.go:282] 0 containers: []
	W1028 12:18:21.964717  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:21.964726  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:21.964794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:22.019111  186170 cri.go:89] found id: ""
	I1028 12:18:22.019144  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.019154  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:22.019163  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:22.019223  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:22.057484  186170 cri.go:89] found id: ""
	I1028 12:18:22.057511  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.057518  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:22.057547  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:22.057606  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:22.096908  186170 cri.go:89] found id: ""
	I1028 12:18:22.096931  186170 logs.go:282] 0 containers: []
	W1028 12:18:22.096938  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:22.096947  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:22.096962  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:22.180348  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:22.180386  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:22.224772  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:22.224808  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:22.277686  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:22.277726  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:22.293300  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:22.293330  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:22.369990  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:22.556987  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.057115  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:21.715576  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.715814  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:25.716043  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:23.767435  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:26.269805  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:24.870290  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:24.887030  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:24.887090  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:24.927592  186170 cri.go:89] found id: ""
	I1028 12:18:24.927620  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.927628  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:24.927635  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:24.927700  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:24.969025  186170 cri.go:89] found id: ""
	I1028 12:18:24.969059  186170 logs.go:282] 0 containers: []
	W1028 12:18:24.969070  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:24.969077  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:24.969142  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:25.005439  186170 cri.go:89] found id: ""
	I1028 12:18:25.005476  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.005488  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:25.005496  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:25.005573  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:25.046612  186170 cri.go:89] found id: ""
	I1028 12:18:25.046650  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.046659  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:25.046669  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:25.046733  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:25.083162  186170 cri.go:89] found id: ""
	I1028 12:18:25.083186  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.083200  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:25.083209  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:25.083270  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:25.119277  186170 cri.go:89] found id: ""
	I1028 12:18:25.119322  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.119333  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:25.119341  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:25.119409  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:25.160875  186170 cri.go:89] found id: ""
	I1028 12:18:25.160906  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.160917  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:25.160925  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:25.160987  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:25.194958  186170 cri.go:89] found id: ""
	I1028 12:18:25.194993  186170 logs.go:282] 0 containers: []
	W1028 12:18:25.195003  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:25.195016  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:25.195032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:25.248571  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:25.248612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:25.264844  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:25.264876  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:25.341487  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:25.341517  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:25.341552  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:25.419543  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:25.419586  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:27.963358  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:27.977449  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:27.977509  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:28.013922  186170 cri.go:89] found id: ""
	I1028 12:18:28.013955  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.013963  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:28.013969  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:28.014050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:28.054628  186170 cri.go:89] found id: ""
	I1028 12:18:28.054658  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.054666  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:28.054671  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:28.054719  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:28.094289  186170 cri.go:89] found id: ""
	I1028 12:18:28.094315  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.094323  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:28.094330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:28.094390  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:28.131949  186170 cri.go:89] found id: ""
	I1028 12:18:28.131998  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.132011  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:28.132019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:28.132082  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:28.170428  186170 cri.go:89] found id: ""
	I1028 12:18:28.170461  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.170474  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:28.170483  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:28.170550  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:28.204953  186170 cri.go:89] found id: ""
	I1028 12:18:28.204980  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.204987  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:28.204994  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:28.205041  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:28.247002  186170 cri.go:89] found id: ""
	I1028 12:18:28.247035  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.247044  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:28.247052  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:28.247122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:28.286700  186170 cri.go:89] found id: ""
	I1028 12:18:28.286730  186170 logs.go:282] 0 containers: []
	W1028 12:18:28.286739  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:28.286747  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:28.286762  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:27.556197  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.057036  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.216535  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:30.715902  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.765730  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:31.267947  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:28.339162  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:28.339201  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:28.353667  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:28.353696  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:28.426762  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:28.426784  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:28.426800  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:28.511192  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:28.511232  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:31.054503  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:31.069105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:31.069195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:31.112198  186170 cri.go:89] found id: ""
	I1028 12:18:31.112228  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.112237  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:31.112243  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:31.112306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:31.151487  186170 cri.go:89] found id: ""
	I1028 12:18:31.151522  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.151535  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:31.151544  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:31.151605  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:31.189604  186170 cri.go:89] found id: ""
	I1028 12:18:31.189636  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.189645  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:31.189651  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:31.189712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:31.231683  186170 cri.go:89] found id: ""
	I1028 12:18:31.231716  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.231726  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:31.231735  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:31.231793  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:31.268785  186170 cri.go:89] found id: ""
	I1028 12:18:31.268813  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.268824  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:31.268832  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:31.268901  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:31.307450  186170 cri.go:89] found id: ""
	I1028 12:18:31.307475  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.307483  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:31.307489  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:31.307539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:31.342965  186170 cri.go:89] found id: ""
	I1028 12:18:31.342999  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.343011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:31.343019  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:31.343084  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:31.380275  186170 cri.go:89] found id: ""
	I1028 12:18:31.380307  186170 logs.go:282] 0 containers: []
	W1028 12:18:31.380317  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:31.380329  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:31.380343  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:31.430198  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:31.430249  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:31.446355  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:31.446387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:31.530708  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:31.530738  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:31.530754  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:31.614033  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:31.614079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:32.556500  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.557446  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.214627  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:35.214782  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:33.772856  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:36.265722  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:34.156345  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:34.169766  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:34.169829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:34.208855  186170 cri.go:89] found id: ""
	I1028 12:18:34.208888  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.208903  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:34.208910  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:34.208967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:34.258485  186170 cri.go:89] found id: ""
	I1028 12:18:34.258515  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.258524  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:34.258531  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:34.258593  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:34.294139  186170 cri.go:89] found id: ""
	I1028 12:18:34.294168  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.294176  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:34.294182  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:34.294242  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:34.329848  186170 cri.go:89] found id: ""
	I1028 12:18:34.329881  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.329892  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:34.329900  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:34.329967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:34.368223  186170 cri.go:89] found id: ""
	I1028 12:18:34.368249  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.368256  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:34.368262  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:34.368310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:34.405101  186170 cri.go:89] found id: ""
	I1028 12:18:34.405133  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.405142  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:34.405149  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:34.405207  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:34.441998  186170 cri.go:89] found id: ""
	I1028 12:18:34.442034  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.442045  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:34.442053  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:34.442118  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:34.478842  186170 cri.go:89] found id: ""
	I1028 12:18:34.478877  186170 logs.go:282] 0 containers: []
	W1028 12:18:34.478888  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:34.478901  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:34.478917  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:34.532950  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:34.532991  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:34.548614  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:34.548643  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:34.623699  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:34.623726  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:34.623743  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:34.702104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:34.702142  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.259720  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:37.276526  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:37.276592  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:37.325783  186170 cri.go:89] found id: ""
	I1028 12:18:37.325823  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.325838  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:37.325847  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:37.325916  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:37.362754  186170 cri.go:89] found id: ""
	I1028 12:18:37.362784  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.362805  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:37.362813  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:37.362891  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:37.400428  186170 cri.go:89] found id: ""
	I1028 12:18:37.400465  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.400477  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:37.400485  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:37.400548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:37.438792  186170 cri.go:89] found id: ""
	I1028 12:18:37.438834  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.438846  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:37.438855  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:37.438918  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:37.477032  186170 cri.go:89] found id: ""
	I1028 12:18:37.477115  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.477126  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:37.477132  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:37.477199  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:37.514834  186170 cri.go:89] found id: ""
	I1028 12:18:37.514866  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.514878  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:37.514888  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:37.514975  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:37.560797  186170 cri.go:89] found id: ""
	I1028 12:18:37.560821  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.560828  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:37.560835  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:37.560889  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:37.611126  186170 cri.go:89] found id: ""
	I1028 12:18:37.611156  186170 logs.go:282] 0 containers: []
	W1028 12:18:37.611165  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:37.611177  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:37.611200  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:37.654809  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:37.654849  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:37.713519  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:37.713572  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:37.728043  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:37.728081  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:37.806662  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:37.806684  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:37.806702  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:36.559507  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.056993  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:37.215498  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:39.715541  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:38.266461  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.266611  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:42.268638  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:40.388380  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:40.402330  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:40.402405  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:40.444948  186170 cri.go:89] found id: ""
	I1028 12:18:40.444978  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.444990  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:40.445002  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:40.445062  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:40.482342  186170 cri.go:89] found id: ""
	I1028 12:18:40.482378  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.482387  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:40.482393  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:40.482457  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:40.532277  186170 cri.go:89] found id: ""
	I1028 12:18:40.532307  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.532318  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:40.532326  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:40.532388  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:40.579092  186170 cri.go:89] found id: ""
	I1028 12:18:40.579122  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.579130  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:40.579136  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:40.579204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:40.617091  186170 cri.go:89] found id: ""
	I1028 12:18:40.617116  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.617124  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:40.617130  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:40.617188  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:40.655830  186170 cri.go:89] found id: ""
	I1028 12:18:40.655861  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.655871  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:40.655879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:40.655949  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:40.693436  186170 cri.go:89] found id: ""
	I1028 12:18:40.693472  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.693480  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:40.693490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:40.693572  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:40.731576  186170 cri.go:89] found id: ""
	I1028 12:18:40.731604  186170 logs.go:282] 0 containers: []
	W1028 12:18:40.731615  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:40.731626  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:40.731642  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:40.782395  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:40.782441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:40.797572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:40.797607  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:40.873037  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:40.873078  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:40.873095  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:40.950913  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:40.950954  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:41.555847  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.558407  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:41.715912  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.716370  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:46.214690  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:44.765752  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:47.266258  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:43.493377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:43.508379  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:43.508453  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:43.546621  186170 cri.go:89] found id: ""
	I1028 12:18:43.546652  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.546660  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:43.546667  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:43.546714  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:43.587430  186170 cri.go:89] found id: ""
	I1028 12:18:43.587455  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.587462  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:43.587468  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:43.587520  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:43.623597  186170 cri.go:89] found id: ""
	I1028 12:18:43.623625  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.623633  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:43.623640  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:43.623702  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:43.661235  186170 cri.go:89] found id: ""
	I1028 12:18:43.661266  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.661274  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:43.661281  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:43.661344  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:43.697400  186170 cri.go:89] found id: ""
	I1028 12:18:43.697437  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.697448  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:43.697457  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:43.697521  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:43.732995  186170 cri.go:89] found id: ""
	I1028 12:18:43.733028  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.733038  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:43.733047  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:43.733115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:43.772570  186170 cri.go:89] found id: ""
	I1028 12:18:43.772595  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.772602  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:43.772608  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:43.772669  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:43.814234  186170 cri.go:89] found id: ""
	I1028 12:18:43.814265  186170 logs.go:282] 0 containers: []
	W1028 12:18:43.814273  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:43.814283  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:43.814295  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:43.868582  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:43.868630  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:43.885098  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:43.885136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:43.967902  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:43.967937  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:43.967955  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:44.048973  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:44.049021  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.592668  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:46.608596  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:46.608664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:46.652750  186170 cri.go:89] found id: ""
	I1028 12:18:46.652777  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.652785  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:46.652790  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:46.652848  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:46.696309  186170 cri.go:89] found id: ""
	I1028 12:18:46.696333  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.696340  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:46.696346  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:46.696396  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:46.741580  186170 cri.go:89] found id: ""
	I1028 12:18:46.741609  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.741620  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:46.741628  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:46.741693  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:46.782589  186170 cri.go:89] found id: ""
	I1028 12:18:46.782620  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.782628  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:46.782635  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:46.782695  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:46.821602  186170 cri.go:89] found id: ""
	I1028 12:18:46.821632  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.821644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:46.821653  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:46.821713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:46.857025  186170 cri.go:89] found id: ""
	I1028 12:18:46.857050  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.857060  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:46.857067  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:46.857115  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:46.893687  186170 cri.go:89] found id: ""
	I1028 12:18:46.893725  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.893737  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:46.893746  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:46.893818  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:46.930334  186170 cri.go:89] found id: ""
	I1028 12:18:46.930367  186170 logs.go:282] 0 containers: []
	W1028 12:18:46.930377  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:46.930385  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:46.930398  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:46.980610  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:46.980650  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:46.995861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:46.995901  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:47.069355  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:47.069383  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:47.069396  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:47.157228  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:47.157284  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:46.056747  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.058377  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.557006  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:48.715456  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:50.716120  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.267222  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:51.765814  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:49.722229  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:49.735404  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:49.735507  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:49.776722  186170 cri.go:89] found id: ""
	I1028 12:18:49.776757  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.776768  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:49.776776  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:49.776844  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:49.812856  186170 cri.go:89] found id: ""
	I1028 12:18:49.812888  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.812898  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:49.812905  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:49.812989  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:49.849483  186170 cri.go:89] found id: ""
	I1028 12:18:49.849516  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.849544  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:49.849603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:49.849672  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:49.886525  186170 cri.go:89] found id: ""
	I1028 12:18:49.886555  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.886566  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:49.886574  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:49.886637  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:49.928249  186170 cri.go:89] found id: ""
	I1028 12:18:49.928281  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.928292  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:49.928299  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:49.928354  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:49.964587  186170 cri.go:89] found id: ""
	I1028 12:18:49.964619  186170 logs.go:282] 0 containers: []
	W1028 12:18:49.964630  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:49.964641  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:49.964704  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:50.002275  186170 cri.go:89] found id: ""
	I1028 12:18:50.002305  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.002314  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:50.002321  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:50.002376  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:50.040949  186170 cri.go:89] found id: ""
	I1028 12:18:50.040979  186170 logs.go:282] 0 containers: []
	W1028 12:18:50.040990  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:50.041003  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:50.041018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:50.086062  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:50.086098  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:50.138786  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:50.138837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:50.152992  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:50.153023  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:50.230432  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:50.230465  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:50.230481  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:52.813001  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:52.825800  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:52.825879  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:52.863852  186170 cri.go:89] found id: ""
	I1028 12:18:52.863882  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.863893  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:52.863901  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:52.863967  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:52.902963  186170 cri.go:89] found id: ""
	I1028 12:18:52.903003  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.903016  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:52.903024  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:52.903098  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:52.950862  186170 cri.go:89] found id: ""
	I1028 12:18:52.950893  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.950903  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:52.950912  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:52.950980  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:52.995840  186170 cri.go:89] found id: ""
	I1028 12:18:52.995872  186170 logs.go:282] 0 containers: []
	W1028 12:18:52.995883  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:52.995891  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:52.995960  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:53.040153  186170 cri.go:89] found id: ""
	I1028 12:18:53.040179  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.040187  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:53.040194  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:53.040256  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:53.077492  186170 cri.go:89] found id: ""
	I1028 12:18:53.077548  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.077561  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:53.077568  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:53.077618  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:53.114930  186170 cri.go:89] found id: ""
	I1028 12:18:53.114962  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.114973  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:53.114981  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:53.115064  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:53.152707  186170 cri.go:89] found id: ""
	I1028 12:18:53.152737  186170 logs.go:282] 0 containers: []
	W1028 12:18:53.152747  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:53.152760  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:53.152777  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:53.195033  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:53.195068  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:53.246464  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:53.246500  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:53.261430  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:53.261456  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:18:52.557045  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.057031  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:53.215817  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:55.714784  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:54.268377  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:56.764471  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:18:53.343518  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:53.343541  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:53.343556  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:55.924584  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:55.938627  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:55.938712  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:55.976319  186170 cri.go:89] found id: ""
	I1028 12:18:55.976354  186170 logs.go:282] 0 containers: []
	W1028 12:18:55.976364  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:55.976372  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:55.976440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:56.013947  186170 cri.go:89] found id: ""
	I1028 12:18:56.013979  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.014002  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:56.014010  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:56.014065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:56.055934  186170 cri.go:89] found id: ""
	I1028 12:18:56.055963  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.055970  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:56.055976  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:56.056030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:56.092766  186170 cri.go:89] found id: ""
	I1028 12:18:56.092798  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.092809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:56.092817  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:56.092883  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:56.129708  186170 cri.go:89] found id: ""
	I1028 12:18:56.129741  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.129748  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:56.129755  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:56.129817  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:56.169640  186170 cri.go:89] found id: ""
	I1028 12:18:56.169684  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.169693  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:56.169700  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:56.169761  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:56.210585  186170 cri.go:89] found id: ""
	I1028 12:18:56.210617  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.210626  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:56.210633  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:56.210683  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:56.248144  186170 cri.go:89] found id: ""
	I1028 12:18:56.248177  186170 logs.go:282] 0 containers: []
	W1028 12:18:56.248189  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:56.248201  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:56.248216  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:56.298962  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:56.299004  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:18:56.313314  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:56.313351  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:56.389450  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:56.389473  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:56.389508  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:56.470888  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:56.470927  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:57.556098  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.057165  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:57.716269  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:00.214149  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:58.765585  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:01.265119  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:18:59.012377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:18:59.025740  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:18:59.025853  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:18:59.063706  186170 cri.go:89] found id: ""
	I1028 12:18:59.063770  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.063782  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:18:59.063794  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:18:59.063855  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:18:59.100543  186170 cri.go:89] found id: ""
	I1028 12:18:59.100573  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.100582  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:18:59.100590  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:18:59.100651  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:18:59.140044  186170 cri.go:89] found id: ""
	I1028 12:18:59.140073  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.140080  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:18:59.140087  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:18:59.140133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:18:59.174872  186170 cri.go:89] found id: ""
	I1028 12:18:59.174905  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.174914  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:18:59.174920  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:18:59.174971  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:18:59.210456  186170 cri.go:89] found id: ""
	I1028 12:18:59.210484  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.210492  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:18:59.210498  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:18:59.210560  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:18:59.248441  186170 cri.go:89] found id: ""
	I1028 12:18:59.248474  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.248485  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:18:59.248494  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:18:59.248558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:18:59.286897  186170 cri.go:89] found id: ""
	I1028 12:18:59.286928  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.286937  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:18:59.286944  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:18:59.286996  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:18:59.323187  186170 cri.go:89] found id: ""
	I1028 12:18:59.323221  186170 logs.go:282] 0 containers: []
	W1028 12:18:59.323232  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:18:59.323244  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:18:59.323260  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:18:59.401126  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:18:59.401156  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:18:59.401171  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:18:59.486673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:18:59.486712  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:18:59.532117  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:18:59.532153  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:18:59.588697  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:18:59.588738  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.104377  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:02.118007  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:02.118092  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:02.157674  186170 cri.go:89] found id: ""
	I1028 12:19:02.157705  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.157715  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:02.157724  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:02.157783  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:02.194407  186170 cri.go:89] found id: ""
	I1028 12:19:02.194437  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.194448  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:02.194456  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:02.194546  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:02.232940  186170 cri.go:89] found id: ""
	I1028 12:19:02.232975  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.232988  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:02.232996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:02.233070  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:02.271554  186170 cri.go:89] found id: ""
	I1028 12:19:02.271595  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.271606  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:02.271613  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:02.271681  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:02.309932  186170 cri.go:89] found id: ""
	I1028 12:19:02.309965  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.309975  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:02.309984  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:02.310044  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:02.345704  186170 cri.go:89] found id: ""
	I1028 12:19:02.345732  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.345740  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:02.345747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:02.345794  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:02.381727  186170 cri.go:89] found id: ""
	I1028 12:19:02.381760  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.381770  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:02.381778  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:02.381841  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:02.417888  186170 cri.go:89] found id: ""
	I1028 12:19:02.417922  186170 logs.go:282] 0 containers: []
	W1028 12:19:02.417933  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:02.417943  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:02.417961  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:02.497427  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:02.497458  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:02.497471  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:02.580562  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:02.580600  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:02.619048  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:02.619087  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:02.677089  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:02.677136  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:02.556763  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.557107  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:02.216779  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:04.714940  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:03.267189  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.268332  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:05.192892  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:05.207240  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:05.207325  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:05.244005  186170 cri.go:89] found id: ""
	I1028 12:19:05.244041  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.244070  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:05.244078  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:05.244130  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:05.285828  186170 cri.go:89] found id: ""
	I1028 12:19:05.285859  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.285869  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:05.285877  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:05.285936  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:05.324666  186170 cri.go:89] found id: ""
	I1028 12:19:05.324694  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.324706  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:05.324713  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:05.324782  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:05.361365  186170 cri.go:89] found id: ""
	I1028 12:19:05.361401  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.361414  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:05.361423  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:05.361485  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:05.399962  186170 cri.go:89] found id: ""
	I1028 12:19:05.399996  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.400007  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:05.400017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:05.400116  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:05.438510  186170 cri.go:89] found id: ""
	I1028 12:19:05.438541  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.438553  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:05.438562  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:05.438624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:05.477168  186170 cri.go:89] found id: ""
	I1028 12:19:05.477204  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.477214  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:05.477222  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:05.477286  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:05.513314  186170 cri.go:89] found id: ""
	I1028 12:19:05.513350  186170 logs.go:282] 0 containers: []
	W1028 12:19:05.513362  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:05.513374  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:05.513388  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:05.568453  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:05.568490  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:05.583833  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:05.583870  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:05.659413  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:05.659438  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:05.659457  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:05.744673  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:05.744714  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.291543  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:08.305747  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:08.305829  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:07.056718  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:09.056994  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:06.715788  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.716850  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:11.215701  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:07.765389  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:10.268458  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:08.350508  186170 cri.go:89] found id: ""
	I1028 12:19:08.350536  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.350544  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:08.350550  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:08.350602  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:08.387432  186170 cri.go:89] found id: ""
	I1028 12:19:08.387463  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.387470  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:08.387476  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:08.387527  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:08.426351  186170 cri.go:89] found id: ""
	I1028 12:19:08.426392  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.426404  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:08.426412  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:08.426478  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:08.467546  186170 cri.go:89] found id: ""
	I1028 12:19:08.467577  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.467586  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:08.467592  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:08.467642  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:08.504317  186170 cri.go:89] found id: ""
	I1028 12:19:08.504347  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.504356  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:08.504363  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:08.504418  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:08.539598  186170 cri.go:89] found id: ""
	I1028 12:19:08.539630  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.539642  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:08.539655  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:08.539713  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:08.578128  186170 cri.go:89] found id: ""
	I1028 12:19:08.578162  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.578173  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:08.578181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:08.578247  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:08.614276  186170 cri.go:89] found id: ""
	I1028 12:19:08.614309  186170 logs.go:282] 0 containers: []
	W1028 12:19:08.614326  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:08.614338  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:08.614354  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:08.691937  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:08.691961  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:08.691977  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:08.773046  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:08.773092  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:08.816419  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:08.816449  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:08.868763  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:08.868811  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.384115  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:11.398325  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:11.398416  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:11.433049  186170 cri.go:89] found id: ""
	I1028 12:19:11.433081  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.433089  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:11.433097  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:11.433151  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:11.469221  186170 cri.go:89] found id: ""
	I1028 12:19:11.469249  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.469259  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:11.469267  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:11.469332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:11.506673  186170 cri.go:89] found id: ""
	I1028 12:19:11.506703  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.506714  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:11.506722  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:11.506802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:11.542657  186170 cri.go:89] found id: ""
	I1028 12:19:11.542684  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.542694  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:11.542702  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:11.542760  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:11.582873  186170 cri.go:89] found id: ""
	I1028 12:19:11.582903  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.582913  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:11.582921  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:11.582990  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:11.619742  186170 cri.go:89] found id: ""
	I1028 12:19:11.619770  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.619784  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:11.619791  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:11.619854  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:11.654169  186170 cri.go:89] found id: ""
	I1028 12:19:11.654200  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.654211  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:11.654220  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:11.654280  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:11.690586  186170 cri.go:89] found id: ""
	I1028 12:19:11.690614  186170 logs.go:282] 0 containers: []
	W1028 12:19:11.690624  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:11.690637  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:11.690656  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:11.744337  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:11.744378  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:11.758405  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:11.758446  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:11.843252  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:11.843278  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:11.843289  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:11.924104  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:11.924140  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:11.559182  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.057546  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:13.216963  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:15.715550  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:12.764850  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.766597  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.265687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:14.464177  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:14.478351  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:14.478423  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:14.518159  186170 cri.go:89] found id: ""
	I1028 12:19:14.518189  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.518200  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:14.518209  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:14.518260  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:14.565688  186170 cri.go:89] found id: ""
	I1028 12:19:14.565722  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.565734  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:14.565742  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:14.565802  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:14.601994  186170 cri.go:89] found id: ""
	I1028 12:19:14.602021  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.602029  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:14.602054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:14.602122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:14.640100  186170 cri.go:89] found id: ""
	I1028 12:19:14.640142  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.640156  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:14.640166  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:14.640237  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:14.675395  186170 cri.go:89] found id: ""
	I1028 12:19:14.675422  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.675430  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:14.675436  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:14.675494  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:14.715365  186170 cri.go:89] found id: ""
	I1028 12:19:14.715393  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.715404  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:14.715413  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:14.715466  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:14.761335  186170 cri.go:89] found id: ""
	I1028 12:19:14.761363  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.761373  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:14.761381  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:14.761446  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:14.800412  186170 cri.go:89] found id: ""
	I1028 12:19:14.800449  186170 logs.go:282] 0 containers: []
	W1028 12:19:14.800461  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:14.800472  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:14.800486  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:14.882189  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:14.882227  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:14.926725  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:14.926752  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:14.979280  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:14.979329  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:14.993985  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:14.994019  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:15.063407  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.564258  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:17.578611  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:17.578679  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:17.615753  186170 cri.go:89] found id: ""
	I1028 12:19:17.615784  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.615797  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:17.615805  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:17.615864  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:17.650812  186170 cri.go:89] found id: ""
	I1028 12:19:17.650851  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.650862  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:17.650870  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:17.651014  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:17.693006  186170 cri.go:89] found id: ""
	I1028 12:19:17.693039  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.693048  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:17.693054  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:17.693104  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:17.733120  186170 cri.go:89] found id: ""
	I1028 12:19:17.733146  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.733153  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:17.733160  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:17.733212  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:17.773002  186170 cri.go:89] found id: ""
	I1028 12:19:17.773029  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.773036  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:17.773042  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:17.773097  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:17.812560  186170 cri.go:89] found id: ""
	I1028 12:19:17.812590  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.812597  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:17.812603  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:17.812653  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:17.848307  186170 cri.go:89] found id: ""
	I1028 12:19:17.848341  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.848349  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:17.848355  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:17.848402  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:17.888184  186170 cri.go:89] found id: ""
	I1028 12:19:17.888210  186170 logs.go:282] 0 containers: []
	W1028 12:19:17.888217  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:17.888226  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:17.888238  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:17.901662  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:17.901692  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:17.975611  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:17.975634  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:17.975647  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:18.054762  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:18.054801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:18.101269  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:18.101302  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:16.057835  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:18.556414  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:17.716374  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.216629  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:19.266849  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:21.267040  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:20.655292  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:20.671085  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:20.671161  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:20.715368  186170 cri.go:89] found id: ""
	I1028 12:19:20.715397  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.715407  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:20.715415  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:20.715476  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:20.762337  186170 cri.go:89] found id: ""
	I1028 12:19:20.762366  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.762374  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:20.762379  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:20.762437  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:20.804710  186170 cri.go:89] found id: ""
	I1028 12:19:20.804740  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.804747  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:20.804759  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:20.804813  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:20.841158  186170 cri.go:89] found id: ""
	I1028 12:19:20.841189  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.841199  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:20.841208  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:20.841277  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:20.883976  186170 cri.go:89] found id: ""
	I1028 12:19:20.884016  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.884027  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:20.884035  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:20.884105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:20.930155  186170 cri.go:89] found id: ""
	I1028 12:19:20.930186  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.930194  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:20.930201  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:20.930265  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:20.967805  186170 cri.go:89] found id: ""
	I1028 12:19:20.967832  186170 logs.go:282] 0 containers: []
	W1028 12:19:20.967840  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:20.967847  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:20.967896  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:21.020010  186170 cri.go:89] found id: ""
	I1028 12:19:21.020038  186170 logs.go:282] 0 containers: []
	W1028 12:19:21.020046  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:21.020055  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:21.020079  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:21.081013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:21.081054  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:21.096709  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:21.096741  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:21.172935  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:21.172957  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:21.172970  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:21.248909  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:21.248949  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:21.056990  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.057233  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:25.555717  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:22.715323  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:24.715818  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.765935  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:26.264839  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:23.793748  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:23.809036  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:23.809107  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:23.848021  186170 cri.go:89] found id: ""
	I1028 12:19:23.848051  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.848064  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:23.848070  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:23.848122  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:23.885253  186170 cri.go:89] found id: ""
	I1028 12:19:23.885278  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.885294  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:23.885302  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:23.885360  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:23.923423  186170 cri.go:89] found id: ""
	I1028 12:19:23.923475  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.923484  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:23.923490  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:23.923554  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:23.963761  186170 cri.go:89] found id: ""
	I1028 12:19:23.963793  186170 logs.go:282] 0 containers: []
	W1028 12:19:23.963809  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:23.963820  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:23.963890  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:24.001402  186170 cri.go:89] found id: ""
	I1028 12:19:24.001431  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.001440  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:24.001447  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:24.001512  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:24.042367  186170 cri.go:89] found id: ""
	I1028 12:19:24.042400  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.042410  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:24.042419  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:24.042480  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:24.081838  186170 cri.go:89] found id: ""
	I1028 12:19:24.081865  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.081873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:24.081879  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:24.081932  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:24.117066  186170 cri.go:89] found id: ""
	I1028 12:19:24.117096  186170 logs.go:282] 0 containers: []
	W1028 12:19:24.117104  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:24.117113  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:24.117125  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:24.156892  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:24.156928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:24.210595  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:24.210631  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:24.226214  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:24.226248  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:24.304750  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:24.304775  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:24.304792  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:26.887059  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:26.901656  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:26.901735  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:26.944377  186170 cri.go:89] found id: ""
	I1028 12:19:26.944407  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.944416  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:26.944425  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:26.944487  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:26.980794  186170 cri.go:89] found id: ""
	I1028 12:19:26.980827  186170 logs.go:282] 0 containers: []
	W1028 12:19:26.980835  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:26.980841  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:26.980907  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:27.023661  186170 cri.go:89] found id: ""
	I1028 12:19:27.023686  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.023694  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:27.023701  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:27.023753  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:27.062325  186170 cri.go:89] found id: ""
	I1028 12:19:27.062353  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.062361  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:27.062369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:27.062417  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:27.101200  186170 cri.go:89] found id: ""
	I1028 12:19:27.101230  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.101237  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:27.101243  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:27.101300  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:27.139566  186170 cri.go:89] found id: ""
	I1028 12:19:27.139591  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.139598  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:27.139605  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:27.139664  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:27.183931  186170 cri.go:89] found id: ""
	I1028 12:19:27.183959  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.183968  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:27.183996  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:27.184065  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:27.226978  186170 cri.go:89] found id: ""
	I1028 12:19:27.227012  186170 logs.go:282] 0 containers: []
	W1028 12:19:27.227027  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:27.227038  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:27.227067  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:27.279752  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:27.279790  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:27.293477  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:27.293504  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:27.365813  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:27.365836  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:27.365850  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:27.458409  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:27.458466  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:27.556370  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.057786  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:27.216093  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:29.715861  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:28.265912  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.266993  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:32.267566  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:30.023363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:30.036965  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:30.037032  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:30.077599  186170 cri.go:89] found id: ""
	I1028 12:19:30.077627  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.077635  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:30.077642  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:30.077691  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:30.115071  186170 cri.go:89] found id: ""
	I1028 12:19:30.115103  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.115113  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:30.115121  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:30.115189  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:30.150636  186170 cri.go:89] found id: ""
	I1028 12:19:30.150665  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.150678  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:30.150684  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:30.150747  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:30.188339  186170 cri.go:89] found id: ""
	I1028 12:19:30.188380  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.188390  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:30.188397  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:30.188452  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:30.224072  186170 cri.go:89] found id: ""
	I1028 12:19:30.224102  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.224113  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:30.224121  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:30.224185  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:30.258784  186170 cri.go:89] found id: ""
	I1028 12:19:30.258822  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.258834  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:30.258842  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:30.258903  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:30.302495  186170 cri.go:89] found id: ""
	I1028 12:19:30.302527  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.302535  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:30.302541  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:30.302590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:30.339170  186170 cri.go:89] found id: ""
	I1028 12:19:30.339201  186170 logs.go:282] 0 containers: []
	W1028 12:19:30.339213  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:30.339223  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:30.339236  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:30.396664  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:30.396700  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:30.411609  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:30.411638  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:30.484168  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:30.484196  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:30.484212  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:30.567664  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:30.567704  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:33.111268  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:33.125143  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:33.125229  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:33.168662  186170 cri.go:89] found id: ""
	I1028 12:19:33.168701  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.168712  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:33.168722  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:33.168792  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:33.222421  186170 cri.go:89] found id: ""
	I1028 12:19:33.222451  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.222463  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:33.222471  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:33.222536  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:33.275637  186170 cri.go:89] found id: ""
	I1028 12:19:33.275669  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.275680  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:33.275689  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:33.275751  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:32.555888  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.556782  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:31.716178  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.213813  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.213999  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:34.764307  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:36.766217  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:33.325787  186170 cri.go:89] found id: ""
	I1028 12:19:33.325818  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.325830  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:33.325840  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:33.325900  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:33.361597  186170 cri.go:89] found id: ""
	I1028 12:19:33.361634  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.361644  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:33.361652  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:33.361744  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:33.401838  186170 cri.go:89] found id: ""
	I1028 12:19:33.401866  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.401874  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:33.401880  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:33.401941  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:33.439315  186170 cri.go:89] found id: ""
	I1028 12:19:33.439342  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.439351  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:33.439359  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:33.439422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:33.479140  186170 cri.go:89] found id: ""
	I1028 12:19:33.479177  186170 logs.go:282] 0 containers: []
	W1028 12:19:33.479188  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:33.479206  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:33.479222  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:33.534059  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:33.534102  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:33.549379  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:33.549416  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:33.626567  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:33.626603  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:33.626619  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:33.702398  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:33.702441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.250145  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:36.265123  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:36.265193  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:36.304048  186170 cri.go:89] found id: ""
	I1028 12:19:36.304078  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.304087  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:36.304093  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:36.304141  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:36.348611  186170 cri.go:89] found id: ""
	I1028 12:19:36.348649  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.348660  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:36.348672  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:36.348739  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:36.390510  186170 cri.go:89] found id: ""
	I1028 12:19:36.390543  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.390555  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:36.390563  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:36.390627  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:36.430465  186170 cri.go:89] found id: ""
	I1028 12:19:36.430489  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.430496  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:36.430503  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:36.430556  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:36.472189  186170 cri.go:89] found id: ""
	I1028 12:19:36.472216  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.472226  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:36.472234  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:36.472332  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:36.510029  186170 cri.go:89] found id: ""
	I1028 12:19:36.510057  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.510065  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:36.510073  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:36.510133  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:36.548556  186170 cri.go:89] found id: ""
	I1028 12:19:36.548581  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.548589  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:36.548595  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:36.548641  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:36.592965  186170 cri.go:89] found id: ""
	I1028 12:19:36.592993  186170 logs.go:282] 0 containers: []
	W1028 12:19:36.593002  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:36.593013  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:36.593032  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:36.608843  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:36.608878  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:36.680629  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:36.680655  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:36.680672  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:36.768605  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:36.768636  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:36.815293  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:36.815334  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:37.056333  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.559461  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:38.214406  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:40.214795  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.264988  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:41.267329  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:39.369371  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:39.382819  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:39.382905  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:39.421953  186170 cri.go:89] found id: ""
	I1028 12:19:39.421990  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.422018  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:39.422028  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:39.422088  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:39.457426  186170 cri.go:89] found id: ""
	I1028 12:19:39.457461  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.457478  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:39.457484  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:39.457558  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:39.494983  186170 cri.go:89] found id: ""
	I1028 12:19:39.495008  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.495018  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:39.495026  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:39.495105  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:39.530187  186170 cri.go:89] found id: ""
	I1028 12:19:39.530221  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.530233  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:39.530242  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:39.530308  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:39.571088  186170 cri.go:89] found id: ""
	I1028 12:19:39.571123  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.571133  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:39.571142  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:39.571204  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:39.605684  186170 cri.go:89] found id: ""
	I1028 12:19:39.605719  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.605731  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:39.605739  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:39.605804  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:39.639083  186170 cri.go:89] found id: ""
	I1028 12:19:39.639115  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.639125  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:39.639133  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:39.639195  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:39.676273  186170 cri.go:89] found id: ""
	I1028 12:19:39.676310  186170 logs.go:282] 0 containers: []
	W1028 12:19:39.676321  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:39.676332  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:39.676349  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:39.733153  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:39.733190  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:39.748475  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:39.748513  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:39.823884  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:39.823906  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:39.823920  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:39.903711  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:39.903763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.447237  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:42.460741  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:42.460822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:42.500518  186170 cri.go:89] found id: ""
	I1028 12:19:42.500553  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.500565  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:42.500574  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:42.500636  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:42.542836  186170 cri.go:89] found id: ""
	I1028 12:19:42.542867  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.542875  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:42.542882  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:42.542943  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:42.581271  186170 cri.go:89] found id: ""
	I1028 12:19:42.581303  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.581322  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:42.581331  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:42.581382  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:42.616772  186170 cri.go:89] found id: ""
	I1028 12:19:42.616796  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.616803  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:42.616809  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:42.616858  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:42.650467  186170 cri.go:89] found id: ""
	I1028 12:19:42.650504  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.650515  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:42.650524  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:42.650590  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:42.688677  186170 cri.go:89] found id: ""
	I1028 12:19:42.688713  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.688726  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:42.688734  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:42.688796  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:42.727141  186170 cri.go:89] found id: ""
	I1028 12:19:42.727167  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.727174  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:42.727181  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:42.727231  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:42.767373  186170 cri.go:89] found id: ""
	I1028 12:19:42.767404  186170 logs.go:282] 0 containers: []
	W1028 12:19:42.767415  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:42.767425  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:42.767438  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:42.818474  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:42.818511  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:42.832181  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:42.832210  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:42.905428  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:42.905450  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:42.905465  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:42.985614  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:42.985653  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:42.056568  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:44.057256  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:42.715261  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.215472  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:43.765595  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.766087  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:45.527361  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:45.541487  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:45.541574  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:45.579562  186170 cri.go:89] found id: ""
	I1028 12:19:45.579591  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.579600  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:45.579606  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:45.579666  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:45.614461  186170 cri.go:89] found id: ""
	I1028 12:19:45.614494  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.614504  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:45.614512  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:45.614575  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:45.651495  186170 cri.go:89] found id: ""
	I1028 12:19:45.651538  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.651550  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:45.651558  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:45.651619  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:45.691664  186170 cri.go:89] found id: ""
	I1028 12:19:45.691699  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.691710  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:45.691718  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:45.691785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:45.730284  186170 cri.go:89] found id: ""
	I1028 12:19:45.730325  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.730341  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:45.730348  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:45.730410  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:45.766524  186170 cri.go:89] found id: ""
	I1028 12:19:45.766554  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.766565  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:45.766573  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:45.766630  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:45.803353  186170 cri.go:89] found id: ""
	I1028 12:19:45.803381  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.803393  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:45.803400  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:45.803468  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:45.842928  186170 cri.go:89] found id: ""
	I1028 12:19:45.842953  186170 logs.go:282] 0 containers: []
	W1028 12:19:45.842960  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:45.842968  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:45.842979  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:45.921782  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:45.921809  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:45.921826  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:45.997269  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:45.997321  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:46.036008  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:46.036042  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:46.090242  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:46.090282  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:46.058519  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.556533  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:47.215644  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:49.715563  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.266115  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:50.268535  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:52.271227  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:48.607052  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:48.620745  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:48.620816  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:48.657550  186170 cri.go:89] found id: ""
	I1028 12:19:48.657582  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.657592  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:48.657601  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:48.657676  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:48.695514  186170 cri.go:89] found id: ""
	I1028 12:19:48.695542  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.695549  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:48.695555  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:48.695603  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:48.733589  186170 cri.go:89] found id: ""
	I1028 12:19:48.733616  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.733624  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:48.733631  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:48.733680  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:48.768340  186170 cri.go:89] found id: ""
	I1028 12:19:48.768370  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.768378  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:48.768384  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:48.768435  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:48.818057  186170 cri.go:89] found id: ""
	I1028 12:19:48.818086  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.818096  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:48.818105  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:48.818169  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:48.854663  186170 cri.go:89] found id: ""
	I1028 12:19:48.854695  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.854705  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:48.854715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:48.854785  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:48.888919  186170 cri.go:89] found id: ""
	I1028 12:19:48.888949  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.888960  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:48.888969  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:48.889030  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:48.923871  186170 cri.go:89] found id: ""
	I1028 12:19:48.923900  186170 logs.go:282] 0 containers: []
	W1028 12:19:48.923908  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:48.923917  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:48.923928  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:48.977985  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:48.978025  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:48.992861  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:48.992893  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:49.071925  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:49.071952  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:49.071969  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:49.149743  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:49.149784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.693881  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:51.708017  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:51.708079  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:51.748837  186170 cri.go:89] found id: ""
	I1028 12:19:51.748872  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.748883  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:51.748892  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:51.748957  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:51.793684  186170 cri.go:89] found id: ""
	I1028 12:19:51.793716  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.793733  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:51.793741  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:51.793803  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:51.832104  186170 cri.go:89] found id: ""
	I1028 12:19:51.832140  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.832151  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:51.832159  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:51.832225  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:51.866214  186170 cri.go:89] found id: ""
	I1028 12:19:51.866250  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.866264  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:51.866270  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:51.866345  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:51.909073  186170 cri.go:89] found id: ""
	I1028 12:19:51.909100  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.909107  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:51.909113  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:51.909160  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:51.949202  186170 cri.go:89] found id: ""
	I1028 12:19:51.949231  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.949239  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:51.949245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:51.949306  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:51.990977  186170 cri.go:89] found id: ""
	I1028 12:19:51.991004  186170 logs.go:282] 0 containers: []
	W1028 12:19:51.991011  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:51.991018  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:51.991069  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:52.027180  186170 cri.go:89] found id: ""
	I1028 12:19:52.027215  186170 logs.go:282] 0 containers: []
	W1028 12:19:52.027226  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:52.027237  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:52.027259  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:52.080482  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:52.080536  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:52.097572  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:52.097612  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:52.173055  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:52.173095  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:52.173113  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:52.249950  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:52.249995  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:51.056089  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:53.056973  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:55.057853  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:51.716787  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.214943  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.765208  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:57.267687  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:54.794765  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:54.809435  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:54.809548  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:54.846763  186170 cri.go:89] found id: ""
	I1028 12:19:54.846793  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.846805  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:54.846815  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:54.846876  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:54.885359  186170 cri.go:89] found id: ""
	I1028 12:19:54.885396  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.885409  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:54.885417  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:54.885481  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:54.922612  186170 cri.go:89] found id: ""
	I1028 12:19:54.922639  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.922650  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:54.922659  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:54.922722  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:54.958406  186170 cri.go:89] found id: ""
	I1028 12:19:54.958439  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.958450  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:54.958459  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:54.958525  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:54.995319  186170 cri.go:89] found id: ""
	I1028 12:19:54.995350  186170 logs.go:282] 0 containers: []
	W1028 12:19:54.995361  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:54.995370  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:54.995440  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:55.032511  186170 cri.go:89] found id: ""
	I1028 12:19:55.032543  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.032551  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:55.032559  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:55.032624  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:55.073196  186170 cri.go:89] found id: ""
	I1028 12:19:55.073226  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.073238  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:55.073245  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:55.073310  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:55.113726  186170 cri.go:89] found id: ""
	I1028 12:19:55.113754  186170 logs.go:282] 0 containers: []
	W1028 12:19:55.113762  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:55.113771  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:55.113787  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:55.164402  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:55.164442  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:55.180729  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:55.180763  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:19:55.254437  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:55.254466  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:55.254483  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:19:55.341392  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:55.341441  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:57.883896  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:19:57.897429  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:19:57.897539  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:19:57.933084  186170 cri.go:89] found id: ""
	I1028 12:19:57.933109  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.933118  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:19:57.933127  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:19:57.933198  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:19:57.971244  186170 cri.go:89] found id: ""
	I1028 12:19:57.971276  186170 logs.go:282] 0 containers: []
	W1028 12:19:57.971289  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:19:57.971298  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:19:57.971361  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:19:58.007916  186170 cri.go:89] found id: ""
	I1028 12:19:58.007952  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.007963  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:19:58.007972  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:19:58.008050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:19:58.043042  186170 cri.go:89] found id: ""
	I1028 12:19:58.043084  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.043094  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:19:58.043103  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:19:58.043172  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:19:58.080277  186170 cri.go:89] found id: ""
	I1028 12:19:58.080314  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.080324  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:19:58.080332  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:19:58.080395  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:19:58.117254  186170 cri.go:89] found id: ""
	I1028 12:19:58.117292  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.117301  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:19:58.117308  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:19:58.117356  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:19:58.152830  186170 cri.go:89] found id: ""
	I1028 12:19:58.152862  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.152873  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:19:58.152881  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:19:58.152946  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:19:58.190229  186170 cri.go:89] found id: ""
	I1028 12:19:58.190259  186170 logs.go:282] 0 containers: []
	W1028 12:19:58.190270  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:19:58.190281  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:19:58.190296  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:19:58.231792  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:19:58.231823  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:19:58.291189  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:19:58.291233  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:19:58.307804  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:19:58.307837  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:19:57.556056  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.557091  185942 pod_ready.go:103] pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:00.050404  185942 pod_ready.go:82] duration metric: took 4m0.000726571s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:00.050457  185942 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k69kz" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:00.050479  185942 pod_ready.go:39] duration metric: took 4m12.759391454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:00.050506  185942 kubeadm.go:597] duration metric: took 4m20.427916933s to restartPrimaryControlPlane
	W1028 12:20:00.050569  185942 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:00.050616  185942 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
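	The pod_ready lines above poll the Ready condition of the metrics-server pod until the 4m0s budget expires, after which the control plane is reset rather than restarted. A rough manual equivalent of that poll (pod name taken from the log; the jsonpath expression is only an illustration):

	  kubectl -n kube-system get pod metrics-server-6867b74b74-k69kz \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'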
	I1028 12:19:56.715048  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.215821  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:19:59.769397  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:02.265702  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	W1028 12:19:58.384490  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:19:58.384515  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:19:58.384530  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:00.963569  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:00.977292  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:20:00.977363  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:20:01.017161  186170 cri.go:89] found id: ""
	I1028 12:20:01.017190  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.017198  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:20:01.017204  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:20:01.017254  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:20:01.054651  186170 cri.go:89] found id: ""
	I1028 12:20:01.054687  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.054698  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:20:01.054705  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:20:01.054768  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:20:01.092934  186170 cri.go:89] found id: ""
	I1028 12:20:01.092968  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.092979  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:20:01.092988  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:20:01.093048  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:20:01.134463  186170 cri.go:89] found id: ""
	I1028 12:20:01.134499  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.134510  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:20:01.134519  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:20:01.134580  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:20:01.171922  186170 cri.go:89] found id: ""
	I1028 12:20:01.171960  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.171970  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:20:01.171978  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:20:01.172050  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:20:01.208664  186170 cri.go:89] found id: ""
	I1028 12:20:01.208694  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.208703  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:20:01.208715  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:20:01.208781  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:20:01.248207  186170 cri.go:89] found id: ""
	I1028 12:20:01.248242  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.248251  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:20:01.248258  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:20:01.248318  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:20:01.289182  186170 cri.go:89] found id: ""
	I1028 12:20:01.289212  186170 logs.go:282] 0 containers: []
	W1028 12:20:01.289222  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:20:01.289233  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:20:01.289277  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:20:01.334646  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:20:01.334679  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:20:01.396212  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:20:01.396255  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:20:01.411774  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:20:01.411801  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:20:01.497745  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:20:01.497772  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:20:01.497784  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:20:01.715264  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.216628  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.765386  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:06.765802  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:04.092363  186170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:04.106585  186170 kubeadm.go:597] duration metric: took 4m1.83229859s to restartPrimaryControlPlane
	W1028 12:20:04.106657  186170 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:04.106678  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:07.549703  186170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.442997936s)
	I1028 12:20:07.549781  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:07.565304  186170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:07.577919  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:07.590433  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:07.590461  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:07.590514  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:07.600793  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:07.600858  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:07.611331  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:07.621191  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:07.621256  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:07.631722  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.642180  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:07.642255  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:07.654425  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:07.664696  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:07.664755  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:07.675272  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:07.902931  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
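	The cleanup above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, removes the stale files, and then re-runs kubeadm init. A compact shell sketch of the same sequence (paths, endpoint, and flags copied from the log; the loop form is only illustrative):

	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
	  done
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem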
	I1028 12:20:06.715439  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.214561  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.216343  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:09.265899  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:11.764867  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:13.716362  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.214893  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:14.264333  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:16.765340  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:18.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:20.715790  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:19.270934  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:21.764931  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:22.715880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:25.216499  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:23.766240  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.271567  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:26.353961  185942 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.303321788s)
	I1028 12:20:26.354038  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:26.373066  185942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:20:26.386209  185942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:20:26.398568  185942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:20:26.398591  185942 kubeadm.go:157] found existing configuration files:
	
	I1028 12:20:26.398634  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:20:26.410916  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:20:26.410976  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:20:26.423771  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:20:26.435883  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:20:26.435961  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:20:26.448506  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.460449  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:20:26.460506  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:20:26.472817  185942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:20:26.483653  185942 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:20:26.483743  185942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:20:26.494435  185942 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:20:26.682378  185942 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:20:27.715587  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:29.717407  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:28.766206  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:30.766289  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.820344  185942 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:20:35.820446  185942 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:20:35.820555  185942 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:20:35.820688  185942 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:20:35.820812  185942 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:20:35.820902  185942 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:20:35.823423  185942 out.go:235]   - Generating certificates and keys ...
	I1028 12:20:35.823594  185942 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:20:35.823700  185942 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:20:35.823804  185942 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:20:35.823893  185942 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:20:35.824001  185942 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:20:35.824082  185942 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:20:35.824167  185942 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:20:35.824255  185942 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:20:35.824360  185942 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:20:35.824445  185942 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:20:35.824504  185942 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:20:35.824566  185942 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:20:35.824622  185942 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:20:35.824725  185942 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:20:35.824805  185942 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:20:35.824944  185942 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:20:35.825058  185942 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:20:35.825209  185942 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:20:35.825300  185942 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:20:35.826890  185942 out.go:235]   - Booting up control plane ...
	I1028 12:20:35.827007  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:20:35.827077  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:20:35.827142  185942 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:20:35.827285  185942 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:20:35.827420  185942 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:20:35.827487  185942 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:20:35.827705  185942 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:20:35.827848  185942 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:20:35.827943  185942 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.264999ms
	I1028 12:20:35.828059  185942 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:20:35.828130  185942 kubeadm.go:310] [api-check] The API server is healthy after 5.502732581s
	I1028 12:20:35.828299  185942 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:20:35.828472  185942 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:20:35.828523  185942 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:20:35.828712  185942 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-709250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:20:35.828764  185942 kubeadm.go:310] [bootstrap-token] Using token: srdxzz.lxk56bs7sgkeocij
	I1028 12:20:35.830228  185942 out.go:235]   - Configuring RBAC rules ...
	I1028 12:20:35.830335  185942 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:20:35.830422  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:20:35.830563  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:20:35.830729  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:20:35.830842  185942 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:20:35.830928  185942 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:20:35.831065  185942 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:20:35.831122  185942 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:20:35.831174  185942 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:20:35.831181  185942 kubeadm.go:310] 
	I1028 12:20:35.831229  185942 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:20:35.831237  185942 kubeadm.go:310] 
	I1028 12:20:35.831302  185942 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:20:35.831313  185942 kubeadm.go:310] 
	I1028 12:20:35.831356  185942 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:20:35.831439  185942 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:20:35.831517  185942 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:20:35.831531  185942 kubeadm.go:310] 
	I1028 12:20:35.831616  185942 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:20:35.831628  185942 kubeadm.go:310] 
	I1028 12:20:35.831678  185942 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:20:35.831682  185942 kubeadm.go:310] 
	I1028 12:20:35.831730  185942 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:20:35.831809  185942 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:20:35.831921  185942 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:20:35.831933  185942 kubeadm.go:310] 
	I1028 12:20:35.832041  185942 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:20:35.832141  185942 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:20:35.832150  185942 kubeadm.go:310] 
	I1028 12:20:35.832249  185942 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832373  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:20:35.832404  185942 kubeadm.go:310] 	--control-plane 
	I1028 12:20:35.832414  185942 kubeadm.go:310] 
	I1028 12:20:35.832516  185942 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:20:35.832524  185942 kubeadm.go:310] 
	I1028 12:20:35.832642  185942 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token srdxzz.lxk56bs7sgkeocij \
	I1028 12:20:35.832812  185942 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 12:20:35.832833  185942 cni.go:84] Creating CNI manager for ""
	I1028 12:20:35.832843  185942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:20:35.834428  185942 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:20:35.835603  185942 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:20:35.847857  185942 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:20:35.867921  185942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:20:35.868088  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:35.868107  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-709250 minikube.k8s.io/updated_at=2024_10_28T12_20_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=embed-certs-709250 minikube.k8s.io/primary=true
	I1028 12:20:35.908233  185942 ops.go:34] apiserver oom_adj: -16
	I1028 12:20:32.215299  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:34.716880  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:32.766922  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:35.267132  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:36.121114  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:36.621188  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.122032  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:37.621405  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.122105  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:38.621960  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.122142  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:39.622093  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.121643  185942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:20:40.287609  185942 kubeadm.go:1113] duration metric: took 4.419612649s to wait for elevateKubeSystemPrivileges
	I1028 12:20:40.287656  185942 kubeadm.go:394] duration metric: took 5m0.720591132s to StartCluster
	I1028 12:20:40.287703  185942 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.287814  185942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:20:40.290472  185942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:20:40.290787  185942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:20:40.291051  185942 config.go:182] Loaded profile config "embed-certs-709250": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:20:40.290926  185942 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:20:40.291125  185942 addons.go:69] Setting default-storageclass=true in profile "embed-certs-709250"
	I1028 12:20:40.291126  185942 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-709250"
	I1028 12:20:40.291142  185942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-709250"
	I1028 12:20:40.291148  185942 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-709250"
	W1028 12:20:40.291158  185942 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:20:40.291182  185942 addons.go:69] Setting metrics-server=true in profile "embed-certs-709250"
	I1028 12:20:40.291220  185942 addons.go:234] Setting addon metrics-server=true in "embed-certs-709250"
	W1028 12:20:40.291233  185942 addons.go:243] addon metrics-server should already be in state true
	I1028 12:20:40.291282  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291195  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.291593  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291631  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291727  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291771  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.291786  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.291813  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.292877  185942 out.go:177] * Verifying Kubernetes components...
	I1028 12:20:40.294858  185942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:20:40.310225  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I1028 12:20:40.310814  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.311524  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.311552  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.311961  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.312174  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.312867  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1028 12:20:40.312901  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I1028 12:20:40.313354  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313389  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.313964  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.313987  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.313967  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.314040  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.314365  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314428  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.314883  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.314907  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.315710  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.315744  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.316210  185942 addons.go:234] Setting addon default-storageclass=true in "embed-certs-709250"
	W1028 12:20:40.316229  185942 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:20:40.316261  185942 host.go:66] Checking if "embed-certs-709250" exists ...
	I1028 12:20:40.316619  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.316648  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.331940  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1028 12:20:40.332732  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.333487  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.333537  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.333932  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.334145  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.336054  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I1028 12:20:40.336291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.336441  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337079  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.337117  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.337211  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I1028 12:20:40.337597  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.337998  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338171  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.338189  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.338291  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.338925  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.338972  185942 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:20:40.339570  185942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:20:40.339621  185942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:20:40.340197  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.341080  185942 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.341099  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:20:40.341115  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.341872  185942 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:20:40.343244  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:20:40.343278  185942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:20:40.343308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.344718  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345186  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.345216  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.345457  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.345666  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.345842  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.346053  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.346977  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347514  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.347546  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.347739  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.347936  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.348069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.348236  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.357912  185942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I1028 12:20:40.358358  185942 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:20:40.358838  185942 main.go:141] libmachine: Using API Version  1
	I1028 12:20:40.358858  185942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:20:40.359224  185942 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:20:40.359441  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetState
	I1028 12:20:40.361308  185942 main.go:141] libmachine: (embed-certs-709250) Calling .DriverName
	I1028 12:20:40.361630  185942 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.361654  185942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:20:40.361675  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHHostname
	I1028 12:20:40.365789  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366319  185942 main.go:141] libmachine: (embed-certs-709250) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:3b:0d", ip: ""} in network mk-embed-certs-709250: {Iface:virbr3 ExpiryTime:2024-10-28 13:15:25 +0000 UTC Type:0 Mac:52:54:00:39:3b:0d Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:embed-certs-709250 Clientid:01:52:54:00:39:3b:0d}
	I1028 12:20:40.366347  185942 main.go:141] libmachine: (embed-certs-709250) DBG | domain embed-certs-709250 has defined IP address 192.168.39.211 and MAC address 52:54:00:39:3b:0d in network mk-embed-certs-709250
	I1028 12:20:40.366659  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHPort
	I1028 12:20:40.366879  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHKeyPath
	I1028 12:20:40.367069  185942 main.go:141] libmachine: (embed-certs-709250) Calling .GetSSHUsername
	I1028 12:20:40.367245  185942 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/embed-certs-709250/id_rsa Username:docker}
	I1028 12:20:40.526205  185942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:20:40.545404  185942 node_ready.go:35] waiting up to 6m0s for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555003  185942 node_ready.go:49] node "embed-certs-709250" has status "Ready":"True"
	I1028 12:20:40.555028  185942 node_ready.go:38] duration metric: took 9.592797ms for node "embed-certs-709250" to be "Ready" ...
	I1028 12:20:40.555047  185942 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:40.564021  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:40.660020  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:20:40.660061  185942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:20:40.666435  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:20:40.691423  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:20:40.692384  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:20:40.692411  185942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:20:40.739518  185942 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:20:40.739549  185942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:20:40.765228  185942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:20:37.216347  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:39.716471  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.192384  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192422  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192491  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192514  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192740  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192759  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192783  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192791  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.192915  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.192942  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.192951  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.192962  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.193093  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193125  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193131  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.193373  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.193403  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.193409  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.229776  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.229808  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.230111  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.230127  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.624688  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.624714  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625048  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.625055  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625066  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625074  185942 main.go:141] libmachine: Making call to close driver server
	I1028 12:20:41.625081  185942 main.go:141] libmachine: (embed-certs-709250) Calling .Close
	I1028 12:20:41.625283  185942 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:20:41.625312  185942 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:20:41.625325  185942 addons.go:475] Verifying addon metrics-server=true in "embed-certs-709250"
	I1028 12:20:41.625329  185942 main.go:141] libmachine: (embed-certs-709250) DBG | Closing plugin on server side
	I1028 12:20:41.627194  185942 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:20:37.771166  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:40.265616  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.265990  186547 pod_ready.go:103] pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:41.628572  185942 addons.go:510] duration metric: took 1.337655555s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:20:42.572801  185942 pod_ready.go:103] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.571062  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.571095  185942 pod_ready.go:82] duration metric: took 3.007040788s for pod "coredns-7c65d6cfc9-p59fl" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.571110  185942 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576592  185942 pod_ready.go:93] pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:43.576620  185942 pod_ready.go:82] duration metric: took 5.500425ms for pod "coredns-7c65d6cfc9-sx86n" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:43.576633  185942 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:45.583586  185942 pod_ready.go:103] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:42.216524  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:44.715547  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:43.758721  186547 pod_ready.go:82] duration metric: took 4m0.000295852s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" ...
	E1028 12:20:43.758758  186547 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cgkz9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1028 12:20:43.758783  186547 pod_ready.go:39] duration metric: took 4m13.710127509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:43.758811  186547 kubeadm.go:597] duration metric: took 4m21.647032906s to restartPrimaryControlPlane
	W1028 12:20:43.758873  186547 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1028 12:20:43.758910  186547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:20:47.089478  185942 pod_ready.go:93] pod "etcd-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.089502  185942 pod_ready.go:82] duration metric: took 3.512861746s for pod "etcd-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.089512  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094229  185942 pod_ready.go:93] pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.094255  185942 pod_ready.go:82] duration metric: took 4.736326ms for pod "kube-apiserver-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.094267  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098823  185942 pod_ready.go:93] pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.098859  185942 pod_ready.go:82] duration metric: took 4.584003ms for pod "kube-controller-manager-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.098872  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104063  185942 pod_ready.go:93] pod "kube-proxy-gck6r" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.104083  185942 pod_ready.go:82] duration metric: took 5.204526ms for pod "kube-proxy-gck6r" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.104091  185942 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168177  185942 pod_ready.go:93] pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace has status "Ready":"True"
	I1028 12:20:47.168210  185942 pod_ready.go:82] duration metric: took 64.110225ms for pod "kube-scheduler-embed-certs-709250" in "kube-system" namespace to be "Ready" ...
	I1028 12:20:47.168221  185942 pod_ready.go:39] duration metric: took 6.613160968s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:20:47.168243  185942 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:20:47.168309  185942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:20:47.186907  185942 api_server.go:72] duration metric: took 6.896070864s to wait for apiserver process to appear ...
	I1028 12:20:47.186944  185942 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:20:47.186998  185942 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1028 12:20:47.191428  185942 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1028 12:20:47.192677  185942 api_server.go:141] control plane version: v1.31.2
	I1028 12:20:47.192708  185942 api_server.go:131] duration metric: took 5.753471ms to wait for apiserver health ...
	I1028 12:20:47.192719  185942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:20:47.372534  185942 system_pods.go:59] 9 kube-system pods found
	I1028 12:20:47.372571  185942 system_pods.go:61] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.372580  185942 system_pods.go:61] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.372585  185942 system_pods.go:61] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.372590  185942 system_pods.go:61] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.372595  185942 system_pods.go:61] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.372599  185942 system_pods.go:61] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.372605  185942 system_pods.go:61] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.372614  185942 system_pods.go:61] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.372620  185942 system_pods.go:61] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.372633  185942 system_pods.go:74] duration metric: took 179.905205ms to wait for pod list to return data ...
	I1028 12:20:47.372647  185942 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:20:47.569853  185942 default_sa.go:45] found service account: "default"
	I1028 12:20:47.569886  185942 default_sa.go:55] duration metric: took 197.228265ms for default service account to be created ...
	I1028 12:20:47.569900  185942 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:20:47.770906  185942 system_pods.go:86] 9 kube-system pods found
	I1028 12:20:47.770941  185942 system_pods.go:89] "coredns-7c65d6cfc9-p59fl" [59ad8040-64c4-429c-905e-29f8b65e4477] Running
	I1028 12:20:47.770948  185942 system_pods.go:89] "coredns-7c65d6cfc9-sx86n" [27c1f7ad-7f31-4280-99e3-70594c81237f] Running
	I1028 12:20:47.770953  185942 system_pods.go:89] "etcd-embed-certs-709250" [11645777-a96b-4eb1-a1f1-b1962521c64f] Running
	I1028 12:20:47.770956  185942 system_pods.go:89] "kube-apiserver-embed-certs-709250" [05bac435-26f6-41af-9a9e-800678b05546] Running
	I1028 12:20:47.770960  185942 system_pods.go:89] "kube-controller-manager-embed-certs-709250" [6e43d5f6-0a04-4b52-baca-45af311b7168] Running
	I1028 12:20:47.770964  185942 system_pods.go:89] "kube-proxy-gck6r" [f06472ac-a4c8-4982-822b-29fccd838314] Running
	I1028 12:20:47.770967  185942 system_pods.go:89] "kube-scheduler-embed-certs-709250" [e602a662-33b3-437a-81bd-a3cab1a0c4c5] Running
	I1028 12:20:47.770973  185942 system_pods.go:89] "metrics-server-6867b74b74-wwlqv" [40ea7346-36fe-4d24-b4d3-1d12e1211182] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:20:47.770977  185942 system_pods.go:89] "storage-provisioner" [e6b66608-d85e-4dfd-96ab-a1295165e2f4] Running
	I1028 12:20:47.770984  185942 system_pods.go:126] duration metric: took 201.078078ms to wait for k8s-apps to be running ...
	I1028 12:20:47.770990  185942 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:20:47.771033  185942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:20:47.787139  185942 system_svc.go:56] duration metric: took 16.13776ms WaitForService to wait for kubelet
	I1028 12:20:47.787171  185942 kubeadm.go:582] duration metric: took 7.496343244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:20:47.787191  185942 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:20:47.969485  185942 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:20:47.969516  185942 node_conditions.go:123] node cpu capacity is 2
	I1028 12:20:47.969547  185942 node_conditions.go:105] duration metric: took 182.350787ms to run NodePressure ...
	I1028 12:20:47.969562  185942 start.go:241] waiting for startup goroutines ...
	I1028 12:20:47.969572  185942 start.go:246] waiting for cluster config update ...
	I1028 12:20:47.969586  185942 start.go:255] writing updated cluster config ...
	I1028 12:20:47.969916  185942 ssh_runner.go:195] Run: rm -f paused
	I1028 12:20:48.021806  185942 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:20:48.023816  185942 out.go:177] * Done! kubectl is now configured to use "embed-certs-709250" cluster and "default" namespace by default
	I1028 12:20:46.716844  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:49.216673  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:51.715101  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:53.715509  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:56.217201  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:20:58.715405  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:00.715890  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:03.214669  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:05.215054  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.108895  186547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.349960271s)
	I1028 12:21:10.108979  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:10.126064  186547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1028 12:21:10.139862  186547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:21:10.150752  186547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:21:10.150780  186547 kubeadm.go:157] found existing configuration files:
	
	I1028 12:21:10.150837  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1028 12:21:10.161522  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:21:10.161604  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:21:10.172230  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1028 12:21:10.183231  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:21:10.183299  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:21:10.194261  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.204462  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:21:10.204524  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:21:10.214991  186547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1028 12:21:10.225246  186547 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:21:10.225315  186547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:21:10.235439  186547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:21:10.280951  186547 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1028 12:21:10.281020  186547 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:21:10.391997  186547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:21:10.392163  186547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:21:10.392297  186547 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1028 12:21:10.402113  186547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:21:07.217549  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:09.716985  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:10.404087  186547 out.go:235]   - Generating certificates and keys ...
	I1028 12:21:10.404194  186547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:21:10.404312  186547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:21:10.404441  186547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:21:10.404537  186547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:21:10.404642  186547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:21:10.404719  186547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:21:10.404824  186547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:21:10.404914  186547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:21:10.405021  186547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:21:10.405124  186547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:21:10.405185  186547 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:21:10.405269  186547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:21:10.608657  186547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:21:10.910608  186547 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1028 12:21:11.076768  186547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:21:11.244109  186547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:21:11.685910  186547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:21:11.686470  186547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:21:11.692266  186547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:21:11.694100  186547 out.go:235]   - Booting up control plane ...
	I1028 12:21:11.694231  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:21:11.694377  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:21:11.694607  186547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:21:11.713908  186547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:21:11.720788  186547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:21:11.720874  186547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:21:11.856867  186547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1028 12:21:11.856998  186547 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1028 12:21:12.358968  186547 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.942759ms
	I1028 12:21:12.359067  186547 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1028 12:21:12.215062  185546 pod_ready.go:103] pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:14.208408  185546 pod_ready.go:82] duration metric: took 4m0.000135609s for pod "metrics-server-6867b74b74-xr9lt" in "kube-system" namespace to be "Ready" ...
	E1028 12:21:14.208447  185546 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1028 12:21:14.208457  185546 pod_ready.go:39] duration metric: took 4m3.200735753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:14.208485  185546 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:14.208519  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:14.208571  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:14.266154  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.266184  185546 cri.go:89] found id: ""
	I1028 12:21:14.266196  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:14.266255  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.271416  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:14.271497  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:14.310426  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.310457  185546 cri.go:89] found id: ""
	I1028 12:21:14.310467  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:14.310529  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.314961  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:14.315037  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:14.362502  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.362530  185546 cri.go:89] found id: ""
	I1028 12:21:14.362540  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:14.362602  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.368118  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:14.368198  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:14.416827  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.416867  185546 cri.go:89] found id: ""
	I1028 12:21:14.416877  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:14.416943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.421640  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:14.421716  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:14.473506  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:14.473552  185546 cri.go:89] found id: ""
	I1028 12:21:14.473563  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:14.473627  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.480106  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:14.480183  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:14.529939  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:14.529964  185546 cri.go:89] found id: ""
	I1028 12:21:14.529971  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:14.530120  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.536199  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:14.536264  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:14.578374  185546 cri.go:89] found id: ""
	I1028 12:21:14.578407  185546 logs.go:282] 0 containers: []
	W1028 12:21:14.578419  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:14.578428  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:14.578490  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:14.620216  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:14.620243  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:14.620249  185546 cri.go:89] found id: ""
	I1028 12:21:14.620258  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:14.620323  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.625798  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:14.630653  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:14.630683  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:14.645364  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:14.645404  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:14.686202  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:14.686234  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:14.730094  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:14.730125  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:14.786272  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:14.786322  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:14.875705  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:14.875746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:14.931913  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:14.931960  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:14.991914  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:14.991953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:15.037022  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:15.037056  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:15.107597  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:15.107649  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:15.161401  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:15.161442  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:15.201916  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:15.201953  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:15.682647  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:15.682694  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:17.861193  186547 kubeadm.go:310] [api-check] The API server is healthy after 5.502448006s
	I1028 12:21:17.874856  186547 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1028 12:21:17.889216  186547 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1028 12:21:17.933411  186547 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1028 12:21:17.933726  186547 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-349222 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1028 12:21:17.964667  186547 kubeadm.go:310] [bootstrap-token] Using token: o3vo7c.1x7759cggrb8kl7r
	I1028 12:21:17.966405  186547 out.go:235]   - Configuring RBAC rules ...
	I1028 12:21:17.966590  186547 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1028 12:21:17.982231  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1028 12:21:17.991850  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1028 12:21:17.996073  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1028 12:21:18.003531  186547 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1028 12:21:18.008369  186547 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1028 12:21:18.272751  186547 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1028 12:21:18.724493  186547 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1028 12:21:19.269583  186547 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1028 12:21:19.270654  186547 kubeadm.go:310] 
	I1028 12:21:19.270715  186547 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1028 12:21:19.270722  186547 kubeadm.go:310] 
	I1028 12:21:19.270782  186547 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1028 12:21:19.270787  186547 kubeadm.go:310] 
	I1028 12:21:19.270816  186547 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1028 12:21:19.270875  186547 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1028 12:21:19.270938  186547 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1028 12:21:19.270949  186547 kubeadm.go:310] 
	I1028 12:21:19.271022  186547 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1028 12:21:19.271063  186547 kubeadm.go:310] 
	I1028 12:21:19.271165  186547 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1028 12:21:19.271190  186547 kubeadm.go:310] 
	I1028 12:21:19.271266  186547 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1028 12:21:19.271380  186547 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1028 12:21:19.271470  186547 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1028 12:21:19.271479  186547 kubeadm.go:310] 
	I1028 12:21:19.271600  186547 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1028 12:21:19.271697  186547 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1028 12:21:19.271709  186547 kubeadm.go:310] 
	I1028 12:21:19.271838  186547 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272010  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b \
	I1028 12:21:19.272068  186547 kubeadm.go:310] 	--control-plane 
	I1028 12:21:19.272079  186547 kubeadm.go:310] 
	I1028 12:21:19.272250  186547 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1028 12:21:19.272270  186547 kubeadm.go:310] 
	I1028 12:21:19.272391  186547 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token o3vo7c.1x7759cggrb8kl7r \
	I1028 12:21:19.272568  186547 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d1fdf748a1f4d8b286cfc9c270639240f81c04a894ce994fd4e984a90f3d23b 
	I1028 12:21:19.273899  186547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:21:19.273955  186547 cni.go:84] Creating CNI manager for ""
	I1028 12:21:19.273977  186547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 12:21:19.275868  186547 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1028 12:21:18.355132  185546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:18.373260  185546 api_server.go:72] duration metric: took 4m14.615888944s to wait for apiserver process to appear ...
	I1028 12:21:18.373292  185546 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:18.373353  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:18.373410  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:18.413207  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.413239  185546 cri.go:89] found id: ""
	I1028 12:21:18.413250  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:18.413336  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.419588  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:18.419655  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:18.476341  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.476373  185546 cri.go:89] found id: ""
	I1028 12:21:18.476383  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:18.476450  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.482835  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:18.482926  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:18.524934  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.524964  185546 cri.go:89] found id: ""
	I1028 12:21:18.524975  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:18.525040  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.530198  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:18.530284  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:18.577310  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:18.577338  185546 cri.go:89] found id: ""
	I1028 12:21:18.577349  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:18.577413  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.583048  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:18.583133  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:18.622556  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:18.622587  185546 cri.go:89] found id: ""
	I1028 12:21:18.622598  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:18.622701  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.628450  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:18.628540  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:18.674827  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:18.674861  185546 cri.go:89] found id: ""
	I1028 12:21:18.674873  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:18.674943  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.680282  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:18.680354  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:18.738014  185546 cri.go:89] found id: ""
	I1028 12:21:18.738044  185546 logs.go:282] 0 containers: []
	W1028 12:21:18.738061  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:18.738070  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:18.738142  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:18.780615  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:18.780645  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:18.780651  185546 cri.go:89] found id: ""
	I1028 12:21:18.780660  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:18.780725  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.786003  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:18.790208  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:18.790231  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:18.806481  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:18.806523  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:18.853343  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:18.853382  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:18.906386  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:18.906424  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:18.948149  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:18.948182  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:19.000642  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:19.000678  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:19.038715  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:19.038744  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:19.079234  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:19.079271  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1028 12:21:19.147309  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:19.147349  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:19.271582  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:19.271620  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:19.319149  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:19.319195  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:19.385399  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:19.385437  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:19.811993  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:19.812035  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:19.277402  186547 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1028 12:21:19.296307  186547 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1028 12:21:19.323315  186547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:19.323370  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-349222 minikube.k8s.io/updated_at=2024_10_28T12_21_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536 minikube.k8s.io/name=default-k8s-diff-port-349222 minikube.k8s.io/primary=true
	I1028 12:21:19.550855  186547 ops.go:34] apiserver oom_adj: -16
	I1028 12:21:19.550882  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.051004  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:20.551001  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.051215  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:21.551283  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.050989  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:22.551423  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.051101  186547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1028 12:21:23.151453  186547 kubeadm.go:1113] duration metric: took 3.828156807s to wait for elevateKubeSystemPrivileges
	I1028 12:21:23.151505  186547 kubeadm.go:394] duration metric: took 5m1.103220882s to StartCluster
	I1028 12:21:23.151530  186547 settings.go:142] acquiring lock: {Name:mk15916e78356764f7730533a8b9145b395656e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.151623  186547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:21:23.153557  186547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-132631/kubeconfig: {Name:mk0ceec8255056cdbee043b6b17ae12ad978b532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1028 12:21:23.153874  186547 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.75 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1028 12:21:23.153996  186547 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1028 12:21:23.154101  186547 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154122  186547 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154133  186547 addons.go:243] addon storage-provisioner should already be in state true
	I1028 12:21:23.154128  186547 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154165  186547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-349222"
	I1028 12:21:23.154160  186547 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-349222"
	I1028 12:21:23.154197  186547 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.154213  186547 addons.go:243] addon metrics-server should already be in state true
	I1028 12:21:23.154167  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154254  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.154664  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154679  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154749  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154135  186547 config.go:182] Loaded profile config "default-k8s-diff-port-349222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:21:23.154803  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.154844  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.154948  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.155649  186547 out.go:177] * Verifying Kubernetes components...
	I1028 12:21:23.157234  186547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1028 12:21:23.172278  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I1028 12:21:23.172870  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.173402  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.173429  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.173851  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.174056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.176299  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1028 12:21:23.176307  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I1028 12:21:23.176897  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177023  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.177553  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177576  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177589  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.177606  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.177887  186547 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-349222"
	W1028 12:21:23.177912  186547 addons.go:243] addon default-storageclass should already be in state true
	I1028 12:21:23.177945  186547 host.go:66] Checking if "default-k8s-diff-port-349222" exists ...
	I1028 12:21:23.177971  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178030  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.178369  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178404  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178541  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.178572  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.178961  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.179002  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.196089  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I1028 12:21:23.197979  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.198578  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.198607  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.199082  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.199301  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.199604  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I1028 12:21:23.200120  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.200519  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.200539  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.200938  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.201204  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.201711  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.201794  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I1028 12:21:23.202225  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.202937  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.202956  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.203305  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.203753  186547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 12:21:23.203791  186547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 12:21:23.204026  186547 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1028 12:21:23.204210  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.205470  186547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1028 12:21:23.205490  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1028 12:21:23.205554  186547 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1028 12:21:23.205576  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.207334  186547 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.207352  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1028 12:21:23.207372  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.209573  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210195  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.210230  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.210366  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.210608  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.210806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.211061  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.211884  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.211910  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.211928  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.212104  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.212351  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.212570  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.212762  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.231664  186547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I1028 12:21:23.232283  186547 main.go:141] libmachine: () Calling .GetVersion
	I1028 12:21:23.232904  186547 main.go:141] libmachine: Using API Version  1
	I1028 12:21:23.232929  186547 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 12:21:23.233414  186547 main.go:141] libmachine: () Calling .GetMachineName
	I1028 12:21:23.233658  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetState
	I1028 12:21:23.236162  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .DriverName
	I1028 12:21:23.236665  186547 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.236680  186547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1028 12:21:23.236700  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHHostname
	I1028 12:21:23.240368  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240675  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:bc:cf", ip: ""} in network mk-default-k8s-diff-port-349222: {Iface:virbr2 ExpiryTime:2024-10-28 13:16:06 +0000 UTC Type:0 Mac:52:54:00:90:bc:cf Iaid: IPaddr:192.168.50.75 Prefix:24 Hostname:default-k8s-diff-port-349222 Clientid:01:52:54:00:90:bc:cf}
	I1028 12:21:23.240697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | domain default-k8s-diff-port-349222 has defined IP address 192.168.50.75 and MAC address 52:54:00:90:bc:cf in network mk-default-k8s-diff-port-349222
	I1028 12:21:23.240848  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHPort
	I1028 12:21:23.241034  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHKeyPath
	I1028 12:21:23.241156  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .GetSSHUsername
	I1028 12:21:23.241281  186547 sshutil.go:53] new ssh client: &{IP:192.168.50.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/default-k8s-diff-port-349222/id_rsa Username:docker}
	I1028 12:21:23.409461  186547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1028 12:21:23.430686  186547 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442439  186547 node_ready.go:49] node "default-k8s-diff-port-349222" has status "Ready":"True"
	I1028 12:21:23.442466  186547 node_ready.go:38] duration metric: took 11.749381ms for node "default-k8s-diff-port-349222" to be "Ready" ...
	I1028 12:21:23.442480  186547 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:23.447741  186547 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:23.515393  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1028 12:21:23.545556  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1028 12:21:23.575253  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1028 12:21:23.575280  186547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1028 12:21:23.663892  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1028 12:21:23.663920  186547 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1028 12:21:23.745621  186547 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:23.745656  186547 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1028 12:21:23.823360  186547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1028 12:21:24.391754  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391789  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.391789  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.391806  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.392092  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.392112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.392123  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.392130  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393697  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393716  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393697  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.393725  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.393733  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.393810  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.393828  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.393886  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394056  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.394088  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.394112  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.413957  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.414000  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.414363  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.414385  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853053  186547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.029641945s)
	I1028 12:21:24.853107  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853123  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853434  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) DBG | Closing plugin on server side
	I1028 12:21:24.853492  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853501  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853518  186547 main.go:141] libmachine: Making call to close driver server
	I1028 12:21:24.853543  186547 main.go:141] libmachine: (default-k8s-diff-port-349222) Calling .Close
	I1028 12:21:24.853784  186547 main.go:141] libmachine: Successfully made call to close driver server
	I1028 12:21:24.853801  186547 main.go:141] libmachine: Making call to close connection to plugin binary
	I1028 12:21:24.853813  186547 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-349222"
	I1028 12:21:24.855707  186547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1028 12:21:22.373623  185546 api_server.go:253] Checking apiserver healthz at https://192.168.72.156:8443/healthz ...
	I1028 12:21:22.379559  185546 api_server.go:279] https://192.168.72.156:8443/healthz returned 200:
	ok
	I1028 12:21:22.380750  185546 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:22.380772  185546 api_server.go:131] duration metric: took 4.007460794s to wait for apiserver health ...
	I1028 12:21:22.380783  185546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:22.380811  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:21:22.380875  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:21:22.426678  185546 cri.go:89] found id: "6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:22.426710  185546 cri.go:89] found id: ""
	I1028 12:21:22.426720  185546 logs.go:282] 1 containers: [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221]
	I1028 12:21:22.426781  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.431942  185546 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:21:22.432014  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:21:22.472504  185546 cri.go:89] found id: "d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:22.472531  185546 cri.go:89] found id: ""
	I1028 12:21:22.472540  185546 logs.go:282] 1 containers: [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7]
	I1028 12:21:22.472595  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.478446  185546 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:21:22.478511  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:21:22.520149  185546 cri.go:89] found id: "9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.520169  185546 cri.go:89] found id: ""
	I1028 12:21:22.520177  185546 logs.go:282] 1 containers: [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71]
	I1028 12:21:22.520235  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.525716  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:21:22.525804  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:21:22.564801  185546 cri.go:89] found id: "9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:22.564832  185546 cri.go:89] found id: ""
	I1028 12:21:22.564844  185546 logs.go:282] 1 containers: [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a]
	I1028 12:21:22.564909  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.570065  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:21:22.570147  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:21:22.613601  185546 cri.go:89] found id: "1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.613628  185546 cri.go:89] found id: ""
	I1028 12:21:22.613637  185546 logs.go:282] 1 containers: [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0]
	I1028 12:21:22.613700  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.618413  185546 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:21:22.618483  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:21:22.664329  185546 cri.go:89] found id: "16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.664358  185546 cri.go:89] found id: ""
	I1028 12:21:22.664369  185546 logs.go:282] 1 containers: [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b]
	I1028 12:21:22.664430  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.669013  185546 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:21:22.669084  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:21:22.706046  185546 cri.go:89] found id: ""
	I1028 12:21:22.706074  185546 logs.go:282] 0 containers: []
	W1028 12:21:22.706084  185546 logs.go:284] No container was found matching "kindnet"
	I1028 12:21:22.706091  185546 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1028 12:21:22.706159  185546 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1028 12:21:22.747718  185546 cri.go:89] found id: "8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.747744  185546 cri.go:89] found id: "3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.747750  185546 cri.go:89] found id: ""
	I1028 12:21:22.747759  185546 logs.go:282] 2 containers: [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1]
	I1028 12:21:22.747825  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.752857  185546 ssh_runner.go:195] Run: which crictl
	I1028 12:21:22.758383  185546 logs.go:123] Gathering logs for kube-proxy [1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0] ...
	I1028 12:21:22.758410  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1edb7fc86811adf3c52cbd3c6c1b74ec232391b5e6f0755b9d69619da85ad9a0"
	I1028 12:21:22.800846  185546 logs.go:123] Gathering logs for kube-controller-manager [16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b] ...
	I1028 12:21:22.800882  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a1ce9b3f38fc1ff36ae658ef5a05802d4d72f9461cd70e3e83b3b7ab6c762b"
	I1028 12:21:22.858663  185546 logs.go:123] Gathering logs for storage-provisioner [8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945] ...
	I1028 12:21:22.858702  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be2c80f222fcf38631919e4117d0d4b493a23c4d49812e416f296b3288f5945"
	I1028 12:21:22.896915  185546 logs.go:123] Gathering logs for storage-provisioner [3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1] ...
	I1028 12:21:22.896959  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3576b8af85140c55c6f165d701d2a4f8047364e76c5be98b27f294b4462b98c1"
	I1028 12:21:22.938476  185546 logs.go:123] Gathering logs for coredns [9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71] ...
	I1028 12:21:22.938503  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a21fcd9e6d821f0cf41bb2e7764a587a09a6232019a2539248619272daf3a71"
	I1028 12:21:22.984601  185546 logs.go:123] Gathering logs for dmesg ...
	I1028 12:21:22.984628  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:21:23.000223  185546 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:21:23.000259  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1028 12:21:23.130709  185546 logs.go:123] Gathering logs for kube-apiserver [6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221] ...
	I1028 12:21:23.130746  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d5abde055384092406d5bfc30b0ea87096ec45df8e072c075d120cc00392221"
	I1028 12:21:23.189821  185546 logs.go:123] Gathering logs for etcd [d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7] ...
	I1028 12:21:23.189859  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d66cdd02dd21114c1dad45f32bce8111b2ff748a3b9cbb7da204467934cdb2a7"
	I1028 12:21:23.244463  185546 logs.go:123] Gathering logs for kube-scheduler [9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a] ...
	I1028 12:21:23.244535  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9473dbbdab67217df649e20e0ec3fcb36df38a4430031e2ee0d737295a84286a"
	I1028 12:21:23.299279  185546 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:21:23.299318  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:21:23.714691  185546 logs.go:123] Gathering logs for container status ...
	I1028 12:21:23.714730  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:21:23.777703  185546 logs.go:123] Gathering logs for kubelet ...
	I1028 12:21:23.777749  185546 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
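	The block above is minikube's diagnostics pass: it lists containers with crictl, then tails the component, CRI-O, and kubelet logs over SSH. As a minimal sketch (assuming shell access to the node, e.g. via 'minikube ssh'), the same data can be collected by hand; the 400-line limits simply mirror the commands in the log, and the container ID is whatever 'crictl ps -a' reports:
		sudo crictl ps -a
		sudo crictl logs --tail 400 <container-id>
		sudo journalctl -u crio -n 400
		sudo journalctl -u kubelet -n 400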
	I1028 12:21:26.364133  185546 system_pods.go:59] 8 kube-system pods found
	I1028 12:21:26.364166  185546 system_pods.go:61] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.364171  185546 system_pods.go:61] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.364175  185546 system_pods.go:61] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.364179  185546 system_pods.go:61] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.364182  185546 system_pods.go:61] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.364185  185546 system_pods.go:61] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.364191  185546 system_pods.go:61] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.364195  185546 system_pods.go:61] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.364201  185546 system_pods.go:74] duration metric: took 3.98341316s to wait for pod list to return data ...
	I1028 12:21:26.364209  185546 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:26.366899  185546 default_sa.go:45] found service account: "default"
	I1028 12:21:26.366925  185546 default_sa.go:55] duration metric: took 2.710943ms for default service account to be created ...
	I1028 12:21:26.366934  185546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:26.371193  185546 system_pods.go:86] 8 kube-system pods found
	I1028 12:21:26.371219  185546 system_pods.go:89] "coredns-7c65d6cfc9-dg2jd" [88811f8d-8c45-4ef1-bbf1-8ca151e23d9a] Running
	I1028 12:21:26.371224  185546 system_pods.go:89] "etcd-no-preload-871884" [880822ce-34f8-4044-b61d-bbaf7c7b0243] Running
	I1028 12:21:26.371228  185546 system_pods.go:89] "kube-apiserver-no-preload-871884" [7d671d8f-bb7f-4a2a-b180-d578e69dc9ed] Running
	I1028 12:21:26.371233  185546 system_pods.go:89] "kube-controller-manager-no-preload-871884" [f71c5ba0-bfd3-46e6-8db9-e067a72ee4fa] Running
	I1028 12:21:26.371237  185546 system_pods.go:89] "kube-proxy-6rc4l" [92def3e4-45f2-4daa-bd07-5366d364a070] Running
	I1028 12:21:26.371240  185546 system_pods.go:89] "kube-scheduler-no-preload-871884" [f6e41e92-9314-4fa5-9104-bcf3373c7b26] Running
	I1028 12:21:26.371246  185546 system_pods.go:89] "metrics-server-6867b74b74-xr9lt" [62926d83-9891-4dec-b0ed-a1fa87e0dd28] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:26.371250  185546 system_pods.go:89] "storage-provisioner" [c258c3a3-c7aa-476f-9802-a3e6accd6c7c] Running
	I1028 12:21:26.371257  185546 system_pods.go:126] duration metric: took 4.318058ms to wait for k8s-apps to be running ...
	I1028 12:21:26.371265  185546 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:26.371317  185546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:26.389093  185546 system_svc.go:56] duration metric: took 17.81758ms WaitForService to wait for kubelet
	I1028 12:21:26.389131  185546 kubeadm.go:582] duration metric: took 4m22.631766189s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:26.389158  185546 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:26.392700  185546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:26.392728  185546 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:26.392741  185546 node_conditions.go:105] duration metric: took 3.576663ms to run NodePressure ...
	I1028 12:21:26.392757  185546 start.go:241] waiting for startup goroutines ...
	I1028 12:21:26.392766  185546 start.go:246] waiting for cluster config update ...
	I1028 12:21:26.392781  185546 start.go:255] writing updated cluster config ...
	I1028 12:21:26.393086  185546 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:26.444274  185546 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:26.446322  185546 out.go:177] * Done! kubectl is now configured to use "no-preload-871884" cluster and "default" namespace by default
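	Since this run ends with kubectl pointed at the "no-preload-871884" context, a quick hedged sanity check from the host is to query that context directly and compare against the kube-system pod list shown above (this assumes the default kubeconfig location that minikube writes to):
		kubectl --context no-preload-871884 get nodes
		kubectl --context no-preload-871884 get pods -n kube-system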
	I1028 12:21:24.856866  186547 addons.go:510] duration metric: took 1.702877543s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1028 12:21:25.462800  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:27.954511  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:30.454530  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.455161  186547 pod_ready.go:103] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"False"
	I1028 12:21:32.955218  186547 pod_ready.go:93] pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.955242  186547 pod_ready.go:82] duration metric: took 9.507473956s for pod "etcd-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.955253  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.960990  186547 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.961018  186547 pod_ready.go:82] duration metric: took 5.757431ms for pod "kube-apiserver-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.961032  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966957  186547 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.966981  186547 pod_ready.go:82] duration metric: took 5.940549ms for pod "kube-controller-manager-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.966991  186547 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972168  186547 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace has status "Ready":"True"
	I1028 12:21:32.972194  186547 pod_ready.go:82] duration metric: took 5.195057ms for pod "kube-scheduler-default-k8s-diff-port-349222" in "kube-system" namespace to be "Ready" ...
	I1028 12:21:32.972205  186547 pod_ready.go:39] duration metric: took 9.529713389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1028 12:21:32.972224  186547 api_server.go:52] waiting for apiserver process to appear ...
	I1028 12:21:32.972294  186547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 12:21:32.988675  186547 api_server.go:72] duration metric: took 9.83476496s to wait for apiserver process to appear ...
	I1028 12:21:32.988711  186547 api_server.go:88] waiting for apiserver healthz status ...
	I1028 12:21:32.988736  186547 api_server.go:253] Checking apiserver healthz at https://192.168.50.75:8444/healthz ...
	I1028 12:21:32.993068  186547 api_server.go:279] https://192.168.50.75:8444/healthz returned 200:
	ok
	I1028 12:21:32.994352  186547 api_server.go:141] control plane version: v1.31.2
	I1028 12:21:32.994377  186547 api_server.go:131] duration metric: took 5.656136ms to wait for apiserver health ...
	I1028 12:21:32.994387  186547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1028 12:21:32.999982  186547 system_pods.go:59] 9 kube-system pods found
	I1028 12:21:33.000010  186547 system_pods.go:61] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.000017  186547 system_pods.go:61] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.000024  186547 system_pods.go:61] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.000029  186547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.000033  186547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.000037  186547 system_pods.go:61] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.000040  186547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.000046  186547 system_pods.go:61] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.000051  186547 system_pods.go:61] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.000064  186547 system_pods.go:74] duration metric: took 5.66991ms to wait for pod list to return data ...
	I1028 12:21:33.000075  186547 default_sa.go:34] waiting for default service account to be created ...
	I1028 12:21:33.003124  186547 default_sa.go:45] found service account: "default"
	I1028 12:21:33.003149  186547 default_sa.go:55] duration metric: took 3.067652ms for default service account to be created ...
	I1028 12:21:33.003159  186547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1028 12:21:33.155864  186547 system_pods.go:86] 9 kube-system pods found
	I1028 12:21:33.155902  186547 system_pods.go:89] "coredns-7c65d6cfc9-nkcb7" [0531b433-940f-4d3d-aae4-9fe5a1b96815] Running
	I1028 12:21:33.155914  186547 system_pods.go:89] "coredns-7c65d6cfc9-rxfxk" [b917b614-94ef-4c38-a1f4-60422af4bb73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1028 12:21:33.155921  186547 system_pods.go:89] "etcd-default-k8s-diff-port-349222" [85a5dcd8-bfac-4090-9427-9816f06f6e86] Running
	I1028 12:21:33.155931  186547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-349222" [cc53ca94-0d24-4b47-8cf1-c0aa21355816] Running
	I1028 12:21:33.155938  186547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-349222" [28004168-1421-4109-b9ba-b967544a5029] Running
	I1028 12:21:33.155943  186547 system_pods.go:89] "kube-proxy-6krbc" [eab77549-1b29-4a66-b284-d63774357f88] Running
	I1028 12:21:33.155948  186547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-349222" [95ee9d74-407b-4b51-8c3d-10df372b9b6c] Running
	I1028 12:21:33.155956  186547 system_pods.go:89] "metrics-server-6867b74b74-4xgsk" [d9428c22-0c65-4809-a647-8a4c3737f67d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1028 12:21:33.155965  186547 system_pods.go:89] "storage-provisioner" [5b672315-a64e-4222-b07a-3a76050a3b67] Running
	I1028 12:21:33.155977  186547 system_pods.go:126] duration metric: took 152.809784ms to wait for k8s-apps to be running ...
	I1028 12:21:33.155991  186547 system_svc.go:44] waiting for kubelet service to be running ....
	I1028 12:21:33.156049  186547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:21:33.171592  186547 system_svc.go:56] duration metric: took 15.589436ms WaitForService to wait for kubelet
	I1028 12:21:33.171647  186547 kubeadm.go:582] duration metric: took 10.017726239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1028 12:21:33.171672  186547 node_conditions.go:102] verifying NodePressure condition ...
	I1028 12:21:33.352932  186547 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1028 12:21:33.352984  186547 node_conditions.go:123] node cpu capacity is 2
	I1028 12:21:33.352995  186547 node_conditions.go:105] duration metric: took 181.317488ms to run NodePressure ...
	I1028 12:21:33.353006  186547 start.go:241] waiting for startup goroutines ...
	I1028 12:21:33.353014  186547 start.go:246] waiting for cluster config update ...
	I1028 12:21:33.353024  186547 start.go:255] writing updated cluster config ...
	I1028 12:21:33.353314  186547 ssh_runner.go:195] Run: rm -f paused
	I1028 12:21:33.405276  186547 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1028 12:21:33.407589  186547 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-349222" cluster and "default" namespace by default
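	The same kind of manual verification applies to "default-k8s-diff-port-349222". The healthz probe the log performs against https://192.168.50.75:8444/healthz can be reproduced with curl; -k is used here only on the assumption that the apiserver presents minikube's self-signed certificate, and a stricter check would pass --cacert with the cluster CA instead:
		curl -k https://192.168.50.75:8444/healthz
		kubectl --context default-k8s-diff-port-349222 get pods -n kube-system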
	I1028 12:22:04.038479  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:22:04.038595  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:22:04.040170  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.040244  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.040356  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.040466  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.040579  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:04.040700  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:04.042557  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:04.042662  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:04.042757  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:04.042877  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:04.042984  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:04.043096  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:04.043158  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:04.043247  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:04.043341  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:04.043442  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:04.043558  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:04.043622  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:04.043675  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:04.043718  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:04.043768  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:04.043825  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:04.043871  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:04.044021  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:04.044164  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:04.044224  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:04.044332  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:04.046085  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:04.046237  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:04.046370  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:04.046463  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:04.046544  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:04.046679  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:04.046728  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:04.046786  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.046976  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047099  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047318  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047393  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047554  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047611  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.047799  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.047892  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:04.048151  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:04.048167  186170 kubeadm.go:310] 
	I1028 12:22:04.048208  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:22:04.048252  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:22:04.048262  186170 kubeadm.go:310] 
	I1028 12:22:04.048317  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:22:04.048363  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:22:04.048453  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:22:04.048464  186170 kubeadm.go:310] 
	I1028 12:22:04.048557  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:22:04.048604  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:22:04.048658  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:22:04.048672  186170 kubeadm.go:310] 
	I1028 12:22:04.048789  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:22:04.048872  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:22:04.048879  186170 kubeadm.go:310] 
	I1028 12:22:04.049027  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:22:04.049143  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:22:04.049246  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:22:04.049347  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:22:04.049428  186170 kubeadm.go:310] 
	W1028 12:22:04.049541  186170 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
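	Before the retry below, the kubeadm output above already names the useful follow-ups on the node; as a sketch, grouped in one place (the CRI-O socket path matches the one used throughout this log, and 'systemctl enable --now kubelet' addresses the Service-Kubelet warning in stderr):
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo systemctl enable --now kubelet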
	
	I1028 12:22:04.049593  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1028 12:22:04.555608  186170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 12:22:04.571673  186170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1028 12:22:04.583645  186170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1028 12:22:04.583667  186170 kubeadm.go:157] found existing configuration files:
	
	I1028 12:22:04.583708  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1028 12:22:04.594436  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1028 12:22:04.594497  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1028 12:22:04.605784  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1028 12:22:04.616699  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1028 12:22:04.616781  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1028 12:22:04.628581  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.639511  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1028 12:22:04.639608  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1028 12:22:04.650503  186170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1028 12:22:04.662383  186170 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1028 12:22:04.662445  186170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1028 12:22:04.673286  186170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1028 12:22:04.755504  186170 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1028 12:22:04.755597  186170 kubeadm.go:310] [preflight] Running pre-flight checks
	I1028 12:22:04.903636  186170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1028 12:22:04.903808  186170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1028 12:22:04.903902  186170 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1028 12:22:05.095520  186170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1028 12:22:05.097710  186170 out.go:235]   - Generating certificates and keys ...
	I1028 12:22:05.097850  186170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1028 12:22:05.097937  186170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1028 12:22:05.098061  186170 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1028 12:22:05.098152  186170 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1028 12:22:05.098252  186170 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1028 12:22:05.098346  186170 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1028 12:22:05.098440  186170 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1028 12:22:05.098905  186170 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1028 12:22:05.099253  186170 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1028 12:22:05.099726  186170 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1028 12:22:05.099786  186170 kubeadm.go:310] [certs] Using the existing "sa" key
	I1028 12:22:05.099872  186170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1028 12:22:05.357781  186170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1028 12:22:05.538771  186170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1028 12:22:05.744145  186170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1028 12:22:06.074866  186170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1028 12:22:06.090636  186170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1028 12:22:06.091772  186170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1028 12:22:06.091863  186170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1028 12:22:06.255534  186170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1028 12:22:06.257598  186170 out.go:235]   - Booting up control plane ...
	I1028 12:22:06.257740  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1028 12:22:06.264309  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1028 12:22:06.266553  186170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1028 12:22:06.266699  186170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1028 12:22:06.268340  186170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1028 12:22:46.271413  186170 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1028 12:22:46.271550  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:46.271812  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:22:51.271863  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:22:51.272118  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:01.272732  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:01.272940  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:23:21.273621  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:23:21.273888  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.272718  186170 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1028 12:24:01.273041  186170 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1028 12:24:01.273073  186170 kubeadm.go:310] 
	I1028 12:24:01.273126  186170 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1028 12:24:01.273220  186170 kubeadm.go:310] 		timed out waiting for the condition
	I1028 12:24:01.273249  186170 kubeadm.go:310] 
	I1028 12:24:01.273303  186170 kubeadm.go:310] 	This error is likely caused by:
	I1028 12:24:01.273375  186170 kubeadm.go:310] 		- The kubelet is not running
	I1028 12:24:01.273508  186170 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1028 12:24:01.273520  186170 kubeadm.go:310] 
	I1028 12:24:01.273665  186170 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1028 12:24:01.273717  186170 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1028 12:24:01.273760  186170 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1028 12:24:01.273770  186170 kubeadm.go:310] 
	I1028 12:24:01.273900  186170 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1028 12:24:01.273966  186170 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1028 12:24:01.273972  186170 kubeadm.go:310] 
	I1028 12:24:01.274090  186170 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1028 12:24:01.274165  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1028 12:24:01.274233  186170 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1028 12:24:01.274294  186170 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1028 12:24:01.274302  186170 kubeadm.go:310] 
	I1028 12:24:01.275128  186170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1028 12:24:01.275221  186170 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1028 12:24:01.275324  186170 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1028 12:24:01.275400  186170 kubeadm.go:394] duration metric: took 7m59.062813621s to StartCluster
	I1028 12:24:01.275480  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1028 12:24:01.275551  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1028 12:24:01.326735  186170 cri.go:89] found id: ""
	I1028 12:24:01.326760  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.326767  186170 logs.go:284] No container was found matching "kube-apiserver"
	I1028 12:24:01.326774  186170 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1028 12:24:01.326822  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1028 12:24:01.368065  186170 cri.go:89] found id: ""
	I1028 12:24:01.368094  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.368103  186170 logs.go:284] No container was found matching "etcd"
	I1028 12:24:01.368109  186170 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1028 12:24:01.368162  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1028 12:24:01.410391  186170 cri.go:89] found id: ""
	I1028 12:24:01.410425  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.410437  186170 logs.go:284] No container was found matching "coredns"
	I1028 12:24:01.410446  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1028 12:24:01.410515  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1028 12:24:01.453290  186170 cri.go:89] found id: ""
	I1028 12:24:01.453332  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.453343  186170 logs.go:284] No container was found matching "kube-scheduler"
	I1028 12:24:01.453361  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1028 12:24:01.453422  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1028 12:24:01.490513  186170 cri.go:89] found id: ""
	I1028 12:24:01.490540  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.490547  186170 logs.go:284] No container was found matching "kube-proxy"
	I1028 12:24:01.490553  186170 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1028 12:24:01.490600  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1028 12:24:01.528320  186170 cri.go:89] found id: ""
	I1028 12:24:01.528350  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.528361  186170 logs.go:284] No container was found matching "kube-controller-manager"
	I1028 12:24:01.528369  186170 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1028 12:24:01.528430  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1028 12:24:01.566998  186170 cri.go:89] found id: ""
	I1028 12:24:01.567030  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.567041  186170 logs.go:284] No container was found matching "kindnet"
	I1028 12:24:01.567050  186170 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1028 12:24:01.567113  186170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1028 12:24:01.600946  186170 cri.go:89] found id: ""
	I1028 12:24:01.600973  186170 logs.go:282] 0 containers: []
	W1028 12:24:01.600983  186170 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1028 12:24:01.600997  186170 logs.go:123] Gathering logs for dmesg ...
	I1028 12:24:01.601018  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1028 12:24:01.615132  186170 logs.go:123] Gathering logs for describe nodes ...
	I1028 12:24:01.615161  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1028 12:24:01.737336  186170 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1028 12:24:01.737371  186170 logs.go:123] Gathering logs for CRI-O ...
	I1028 12:24:01.737387  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1028 12:24:01.862216  186170 logs.go:123] Gathering logs for container status ...
	I1028 12:24:01.862257  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1028 12:24:01.906635  186170 logs.go:123] Gathering logs for kubelet ...
	I1028 12:24:01.906666  186170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1028 12:24:01.959555  186170 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1028 12:24:01.959629  186170 out.go:270] * 
	W1028 12:24:01.959691  186170 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.959706  186170 out.go:270] * 
	W1028 12:24:01.960513  186170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1028 12:24:01.963818  186170 out.go:201] 
	W1028 12:24:01.965768  186170 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1028 12:24:01.965852  186170 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1028 12:24:01.965874  186170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1028 12:24:01.967350  186170 out.go:201] 
	
	
	==> CRI-O <==
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.251604826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118917251583053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4f90b98-0c99-4cc4-89ff-7836d9768168 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.252155081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d5690d4-4c06-4c40-80d8-635a22a5a526 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.252251410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d5690d4-4c06-4c40-80d8-635a22a5a526 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.252291592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8d5690d4-4c06-4c40-80d8-635a22a5a526 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.286570566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=362c4aa9-14f3-499f-8878-89167b773212 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.286674884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=362c4aa9-14f3-499f-8878-89167b773212 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.287947593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e577266-276c-4921-bd63-1d1040d6862a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.288383408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118917288355864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e577266-276c-4921-bd63-1d1040d6862a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.288932356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d22bb95-1c21-4fb6-ae1c-70f6b88e89b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.289002967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d22bb95-1c21-4fb6-ae1c-70f6b88e89b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.289037963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3d22bb95-1c21-4fb6-ae1c-70f6b88e89b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.323277906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8205357-b81d-4c9c-8b46-026cab66acd8 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.323381872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8205357-b81d-4c9c-8b46-026cab66acd8 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.324500864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6175c41a-ef3a-48cd-bc3c-728bb078fb5f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.324976814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118917324952937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6175c41a-ef3a-48cd-bc3c-728bb078fb5f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.325744101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6852bdcd-b3ca-4573-8dad-a3dcbafdc55d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.325797160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6852bdcd-b3ca-4573-8dad-a3dcbafdc55d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.325838882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6852bdcd-b3ca-4573-8dad-a3dcbafdc55d name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.361017342Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9527d75-d853-4d81-b6b9-cd8d2c82b589 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.361118184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9527d75-d853-4d81-b6b9-cd8d2c82b589 name=/runtime.v1.RuntimeService/Version
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.362086738Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6712df4-ab62-4771-a507-e54df3a000d4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.362541069Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730118917362513383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6712df4-ab62-4771-a507-e54df3a000d4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.363139699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=253cf4fe-8943-467b-8c7d-3454c6408226 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.363217606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=253cf4fe-8943-467b-8c7d-3454c6408226 name=/runtime.v1.RuntimeService/ListContainers
	Oct 28 12:35:17 old-k8s-version-089993 crio[635]: time="2024-10-28 12:35:17.363258210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=253cf4fe-8943-467b-8c7d-3454c6408226 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct28 12:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056040] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049869] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.987135] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.705731] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.652068] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.124100] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.059356] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067583] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.203906] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.129426] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.273379] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[Oct28 12:16] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.076324] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.030052] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[ +12.368021] kauditd_printk_skb: 46 callbacks suppressed
	[Oct28 12:20] systemd-fstab-generator[5004]: Ignoring "noauto" option for root device
	[Oct28 12:22] systemd-fstab-generator[5284]: Ignoring "noauto" option for root device
	[  +0.072681] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:35:17 up 19 min,  0 users,  load average: 0.28, 0.10, 0.06
	Linux old-k8s-version-089993 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]: goroutine 154 [chan receive]:
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc00013a990)
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]: goroutine 155 [select]:
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000109ef0, 0x4f0ac20, 0xc000cd7950, 0x1, 0xc00009e0c0)
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001f6ee0, 0xc00009e0c0)
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0003304b0, 0xc00045ef80)
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 28 12:35:14 old-k8s-version-089993 kubelet[6764]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 28 12:35:15 old-k8s-version-089993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 137.
	Oct 28 12:35:15 old-k8s-version-089993 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 28 12:35:15 old-k8s-version-089993 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 28 12:35:15 old-k8s-version-089993 kubelet[6773]: I1028 12:35:15.481024    6773 server.go:416] Version: v1.20.0
	Oct 28 12:35:15 old-k8s-version-089993 kubelet[6773]: I1028 12:35:15.481279    6773 server.go:837] Client rotation is on, will bootstrap in background
	Oct 28 12:35:15 old-k8s-version-089993 kubelet[6773]: I1028 12:35:15.483297    6773 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 28 12:35:15 old-k8s-version-089993 kubelet[6773]: W1028 12:35:15.484272    6773 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 28 12:35:15 old-k8s-version-089993 kubelet[6773]: I1028 12:35:15.484442    6773 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 2 (233.199361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-089993" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (130.08s)
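The failing step above is kubeadm's wait-control-plane timeout: the kubelet on old-k8s-version-089993 keeps crash-looping (restart counter at 137 in the kubelet log) and no control-plane containers ever appear in CRI-O. A minimal troubleshooting sketch, assuming shell access to the node (for example via `minikube ssh -p old-k8s-version-089993`), that simply runs the commands the failure output itself recommends:

	# inside the VM: check the kubelet unit and its recent journal entries
	systemctl status kubelet
	journalctl -xeu kubelet

	# list any Kubernetes containers CRI-O started (the "container status" section above shows none)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# from the host: retry the start with the cgroup driver named in the suggestion line
	minikube start -p old-k8s-version-089993 --extra-config=kubelet.cgroup-driver=systemd

The last command retries with the cgroup driver from the printed suggestion; the report does not show whether that resolves this particular run.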

                                                
                                    

Test pass (243/314)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 27.58
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 12.73
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.14
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
22 TestOffline 113.46
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 139.05
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.53
35 TestAddons/parallel/Registry 18.51
37 TestAddons/parallel/InspektorGadget 11.98
40 TestAddons/parallel/CSI 68.73
41 TestAddons/parallel/Headlamp 19.78
42 TestAddons/parallel/CloudSpanner 5.62
43 TestAddons/parallel/LocalPath 55.23
44 TestAddons/parallel/NvidiaDevicePlugin 6.59
45 TestAddons/parallel/Yakd 10.78
48 TestCertOptions 50.79
49 TestCertExpiration 269.15
51 TestForceSystemdFlag 70.98
52 TestForceSystemdEnv 50.71
54 TestKVMDriverInstallOrUpdate 4.57
58 TestErrorSpam/setup 42.61
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.79
61 TestErrorSpam/pause 1.69
62 TestErrorSpam/unpause 1.89
63 TestErrorSpam/stop 5.86
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 87.61
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 35.08
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.62
75 TestFunctional/serial/CacheCmd/cache/add_local 2.26
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 34.05
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.52
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 5.72
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 21.56
91 TestFunctional/parallel/DryRun 0.3
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.96
97 TestFunctional/parallel/ServiceCmdConnect 23.53
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 47.58
101 TestFunctional/parallel/SSHCmd 0.46
102 TestFunctional/parallel/CpCmd 1.36
103 TestFunctional/parallel/MySQL 22.72
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.4
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
113 TestFunctional/parallel/License 1.16
123 TestFunctional/parallel/ServiceCmd/DeployApp 22.29
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
125 TestFunctional/parallel/ProfileCmd/profile_list 0.39
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
127 TestFunctional/parallel/ServiceCmd/List 0.53
128 TestFunctional/parallel/MountCmd/any-port 10.92
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
131 TestFunctional/parallel/ServiceCmd/Format 0.39
132 TestFunctional/parallel/ServiceCmd/URL 0.4
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.38
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.4
137 TestFunctional/parallel/ImageCommands/ImageBuild 5.34
138 TestFunctional/parallel/ImageCommands/Setup 1.84
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.82
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.24
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.83
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.02
145 TestFunctional/parallel/MountCmd/specific-port 1.83
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.49
148 TestFunctional/parallel/Version/short 0.05
149 TestFunctional/parallel/Version/components 0.85
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.33
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 211.01
160 TestMultiControlPlane/serial/DeployApp 6.96
161 TestMultiControlPlane/serial/PingHostFromPods 1.26
162 TestMultiControlPlane/serial/AddWorkerNode 59.74
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
165 TestMultiControlPlane/serial/CopyFile 13.47
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.76
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
174 TestMultiControlPlane/serial/RestartCluster 354.64
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
176 TestMultiControlPlane/serial/AddSecondaryNode 79.75
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
181 TestJSONOutput/start/Command 56.88
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.65
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.37
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 88.47
213 TestMountStart/serial/StartWithMountFirst 27.36
214 TestMountStart/serial/VerifyMountFirst 0.39
215 TestMountStart/serial/StartWithMountSecond 27.95
216 TestMountStart/serial/VerifyMountSecond 0.39
217 TestMountStart/serial/DeleteFirst 0.88
218 TestMountStart/serial/VerifyMountPostDelete 0.39
219 TestMountStart/serial/Stop 1.38
220 TestMountStart/serial/RestartStopped 22.96
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 112.96
225 TestMultiNode/serial/DeployApp2Nodes 6.18
226 TestMultiNode/serial/PingHostFrom2Pods 0.8
227 TestMultiNode/serial/AddNode 53.08
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.59
230 TestMultiNode/serial/CopyFile 7.29
231 TestMultiNode/serial/StopNode 2.38
232 TestMultiNode/serial/StartAfterStop 41.22
234 TestMultiNode/serial/DeleteNode 2.25
236 TestMultiNode/serial/RestartMultiNode 178.43
237 TestMultiNode/serial/ValidateNameConflict 44.99
244 TestScheduledStopUnix 113.01
248 TestRunningBinaryUpgrade 192.51
253 TestStoppedBinaryUpgrade/Setup 2.51
254 TestPause/serial/Start 85.2
255 TestStoppedBinaryUpgrade/Upgrade 205
271 TestNetworkPlugins/group/false 3.22
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
277 TestNoKubernetes/serial/StartWithK8s 54.2
278 TestNoKubernetes/serial/StartWithStopK8s 39.44
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
280 TestNoKubernetes/serial/Start 49.2
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
282 TestNoKubernetes/serial/ProfileList 1.39
283 TestNoKubernetes/serial/Stop 1.35
284 TestNoKubernetes/serial/StartNoArgs 42.9
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
289 TestStartStop/group/no-preload/serial/FirstStart 113.07
290 TestStartStop/group/no-preload/serial/DeployApp 11.37
292 TestStartStop/group/embed-certs/serial/FirstStart 59.01
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.71
297 TestStartStop/group/embed-certs/serial/DeployApp 11.3
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
306 TestStartStop/group/no-preload/serial/SecondStart 650.49
308 TestStartStop/group/embed-certs/serial/SecondStart 552.43
309 TestStartStop/group/old-k8s-version/serial/Stop 3.29
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 511.3
323 TestStartStop/group/newest-cni/serial/FirstStart 48.23
324 TestNetworkPlugins/group/auto/Start 82.16
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.41
327 TestStartStop/group/newest-cni/serial/Stop 11.39
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/newest-cni/serial/SecondStart 44.44
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/newest-cni/serial/Pause 2.84
334 TestNetworkPlugins/group/kindnet/Start 62.49
335 TestNetworkPlugins/group/auto/KubeletFlags 0.21
336 TestNetworkPlugins/group/auto/NetCatPod 11.24
337 TestNetworkPlugins/group/auto/DNS 0.19
338 TestNetworkPlugins/group/auto/Localhost 0.18
339 TestNetworkPlugins/group/auto/HairPin 0.16
340 TestNetworkPlugins/group/calico/Start 82.25
341 TestNetworkPlugins/group/custom-flannel/Start 102.1
342 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
343 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
344 TestNetworkPlugins/group/kindnet/NetCatPod 11.2
345 TestNetworkPlugins/group/kindnet/DNS 0.17
346 TestNetworkPlugins/group/kindnet/Localhost 0.12
347 TestNetworkPlugins/group/kindnet/HairPin 0.16
348 TestNetworkPlugins/group/enable-default-cni/Start 56.29
349 TestNetworkPlugins/group/flannel/Start 100.06
350 TestNetworkPlugins/group/calico/ControllerPod 6.12
351 TestNetworkPlugins/group/calico/KubeletFlags 0.35
352 TestNetworkPlugins/group/calico/NetCatPod 13.3
353 TestNetworkPlugins/group/calico/DNS 0.15
354 TestNetworkPlugins/group/calico/Localhost 0.18
355 TestNetworkPlugins/group/calico/HairPin 0.14
356 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
357 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
360 TestNetworkPlugins/group/custom-flannel/DNS 0.22
361 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
363 TestNetworkPlugins/group/bridge/Start 94.79
364 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
365 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
366 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
367 TestNetworkPlugins/group/flannel/ControllerPod 6.01
368 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
369 TestNetworkPlugins/group/flannel/NetCatPod 12.25
370 TestNetworkPlugins/group/flannel/DNS 0.16
371 TestNetworkPlugins/group/flannel/Localhost 0.17
372 TestNetworkPlugins/group/flannel/HairPin 0.13
373 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
374 TestNetworkPlugins/group/bridge/NetCatPod 10.22
375 TestNetworkPlugins/group/bridge/DNS 0.15
376 TestNetworkPlugins/group/bridge/Localhost 0.12
377 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (27.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-114118 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-114118 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.577090992s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.58s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1028 10:55:04.950645  140303 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1028 10:55:04.950748  140303 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
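The preload-exists check only asserts that the tarball cached by the json-events run above is present on disk. A quick manual equivalent, assuming the cache path printed in the log:

	ls -lh /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4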

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-114118
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-114118: exit status 85 (64.249157ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-114118 | jenkins | v1.34.0 | 28 Oct 24 10:54 UTC |          |
	|         | -p download-only-114118        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:54:37
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:54:37.417724  140314 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:54:37.417840  140314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:54:37.417850  140314 out.go:358] Setting ErrFile to fd 2...
	I1028 10:54:37.417853  140314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:54:37.418022  140314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	W1028 10:54:37.418161  140314 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19876-132631/.minikube/config/config.json: open /home/jenkins/minikube-integration/19876-132631/.minikube/config/config.json: no such file or directory
	I1028 10:54:37.418745  140314 out.go:352] Setting JSON to true
	I1028 10:54:37.419696  140314 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2220,"bootTime":1730110657,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 10:54:37.419801  140314 start.go:139] virtualization: kvm guest
	I1028 10:54:37.422340  140314 out.go:97] [download-only-114118] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1028 10:54:37.422482  140314 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball: no such file or directory
	I1028 10:54:37.422567  140314 notify.go:220] Checking for updates...
	I1028 10:54:37.424075  140314 out.go:169] MINIKUBE_LOCATION=19876
	I1028 10:54:37.425837  140314 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:54:37.427684  140314 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 10:54:37.429549  140314 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 10:54:37.431322  140314 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 10:54:37.434541  140314 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:54:37.434795  140314 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:54:37.472247  140314 out.go:97] Using the kvm2 driver based on user configuration
	I1028 10:54:37.472288  140314 start.go:297] selected driver: kvm2
	I1028 10:54:37.472295  140314 start.go:901] validating driver "kvm2" against <nil>
	I1028 10:54:37.472628  140314 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:54:37.472722  140314 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 10:54:37.489221  140314 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 10:54:37.489276  140314 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:54:37.489837  140314 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1028 10:54:37.490011  140314 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:54:37.490047  140314 cni.go:84] Creating CNI manager for ""
	I1028 10:54:37.490111  140314 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 10:54:37.490121  140314 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 10:54:37.490192  140314 start.go:340] cluster config:
	{Name:download-only-114118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-114118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:54:37.490494  140314 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:54:37.492546  140314 out.go:97] Downloading VM boot image ...
	I1028 10:54:37.492598  140314 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1028 10:54:48.924797  140314 out.go:97] Starting "download-only-114118" primary control-plane node in "download-only-114118" cluster
	I1028 10:54:48.924835  140314 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 10:54:49.029048  140314 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1028 10:54:49.029086  140314 cache.go:56] Caching tarball of preloaded images
	I1028 10:54:49.029260  140314 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1028 10:54:49.031290  140314 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1028 10:54:49.031316  140314 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1028 10:54:49.614348  140314 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-114118 host does not exist
	  To start a cluster, run: "minikube start -p download-only-114118"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-114118
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (12.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-553455 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-553455 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.724967642s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (12.73s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1028 10:55:18.010914  140303 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1028 10:55:18.010961  140303 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-553455
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-553455: exit status 85 (66.476065ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-114118 | jenkins | v1.34.0 | 28 Oct 24 10:54 UTC |                     |
	|         | -p download-only-114118        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| delete  | -p download-only-114118        | download-only-114118 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC | 28 Oct 24 10:55 UTC |
	| start   | -o=json --download-only        | download-only-553455 | jenkins | v1.34.0 | 28 Oct 24 10:55 UTC |                     |
	|         | -p download-only-553455        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/28 10:55:05
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1028 10:55:05.329543  140568 out.go:345] Setting OutFile to fd 1 ...
	I1028 10:55:05.329680  140568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:55:05.329689  140568 out.go:358] Setting ErrFile to fd 2...
	I1028 10:55:05.329694  140568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 10:55:05.329883  140568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 10:55:05.330463  140568 out.go:352] Setting JSON to true
	I1028 10:55:05.331440  140568 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2248,"bootTime":1730110657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 10:55:05.331507  140568 start.go:139] virtualization: kvm guest
	I1028 10:55:05.333977  140568 out.go:97] [download-only-553455] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 10:55:05.334149  140568 notify.go:220] Checking for updates...
	I1028 10:55:05.336092  140568 out.go:169] MINIKUBE_LOCATION=19876
	I1028 10:55:05.338097  140568 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 10:55:05.339848  140568 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 10:55:05.341652  140568 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 10:55:05.343426  140568 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1028 10:55:05.346747  140568 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1028 10:55:05.347095  140568 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 10:55:05.380926  140568 out.go:97] Using the kvm2 driver based on user configuration
	I1028 10:55:05.380960  140568 start.go:297] selected driver: kvm2
	I1028 10:55:05.380970  140568 start.go:901] validating driver "kvm2" against <nil>
	I1028 10:55:05.381563  140568 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:55:05.381671  140568 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19876-132631/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1028 10:55:05.397878  140568 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1028 10:55:05.397957  140568 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1028 10:55:05.398494  140568 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1028 10:55:05.398627  140568 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1028 10:55:05.398653  140568 cni.go:84] Creating CNI manager for ""
	I1028 10:55:05.398730  140568 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1028 10:55:05.398740  140568 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1028 10:55:05.398791  140568 start.go:340] cluster config:
	{Name:download-only-553455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-553455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 10:55:05.398887  140568 iso.go:125] acquiring lock: {Name:mk59dbe44ea43facc8bc783be0c660784bccad5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1028 10:55:05.400665  140568 out.go:97] Starting "download-only-553455" primary control-plane node in "download-only-553455" cluster
	I1028 10:55:05.400692  140568 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:55:05.952556  140568 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1028 10:55:05.952599  140568 cache.go:56] Caching tarball of preloaded images
	I1028 10:55:05.952755  140568 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1028 10:55:05.954652  140568 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1028 10:55:05.954674  140568 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1028 10:55:06.064057  140568 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/19876-132631/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-553455 host does not exist
	  To start a cluster, run: "minikube start -p download-only-553455"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-553455
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1028 10:55:18.630629  140303 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-110570 --alsologtostderr --binary-mirror http://127.0.0.1:43021 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-110570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-110570
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (113.46s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-721924 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-721924 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m52.400673722s)
helpers_test.go:175: Cleaning up "offline-crio-721924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-721924
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-721924: (1.056271904s)
--- PASS: TestOffline (113.46s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-892779
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-892779: exit status 85 (55.906228ms)

                                                
                                                
-- stdout --
	* Profile "addons-892779" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-892779"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-892779
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-892779: exit status 85 (57.98881ms)

                                                
                                                
-- stdout --
	* Profile "addons-892779" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-892779"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (139.05s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-892779 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-892779 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m19.050773287s)
--- PASS: TestAddons/Setup (139.05s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-892779 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-892779 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-892779 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-892779 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da189efe-7ffa-4bdf-87b1-c414bec80098] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [da189efe-7ffa-4bdf-87b1-c414bec80098] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.007912992s
addons_test.go:633: (dbg) Run:  kubectl --context addons-892779 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-892779 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-892779 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.53s)

                                                
                                    
TestAddons/parallel/Registry (18.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.718362ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-rnl5j" [5e520c13-81a2-4ebf-ab10-4fecd61cddd7] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003665322s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7cjwq" [55548851-badf-40ba-a4b8-18d300af90f3] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.059806052s
addons_test.go:331: (dbg) Run:  kubectl --context addons-892779 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-892779 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-892779 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.684121684s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 ip
2024/10/28 10:58:14 [DEBUG] GET http://192.168.39.106:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.51s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7shj9" [ddab8a12-b422-4085-894f-b536c7132928] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004806388s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 addons disable inspektor-gadget --alsologtostderr -v=1: (5.976018008s)
--- PASS: TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                    
TestAddons/parallel/CSI (68.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1028 10:58:21.858146  140303 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1028 10:58:21.864042  140303 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1028 10:58:21.864070  140303 kapi.go:107] duration metric: took 5.938266ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.948883ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-892779 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-892779 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b53b4cb8-2081-419b-99d8-3a6ea20b02fa] Pending
helpers_test.go:344: "task-pv-pod" [b53b4cb8-2081-419b-99d8-3a6ea20b02fa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b53b4cb8-2081-419b-99d8-3a6ea20b02fa] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00478718s
addons_test.go:511: (dbg) Run:  kubectl --context addons-892779 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-892779 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-892779 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-892779 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-892779 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-892779 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-892779 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [37146935-06b9-41de-99c2-dec4e6254a90] Pending
helpers_test.go:344: "task-pv-pod-restore" [37146935-06b9-41de-99c2-dec4e6254a90] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [37146935-06b9-41de-99c2-dec4e6254a90] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00444923s
addons_test.go:553: (dbg) Run:  kubectl --context addons-892779 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-892779 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-892779 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 addons disable volumesnapshots --alsologtostderr -v=1: (1.001235528s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.868732475s)
--- PASS: TestAddons/parallel/CSI (68.73s)

                                                
                                    
TestAddons/parallel/Headlamp (19.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-892779 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-x5d5w" [f6712170-0222-40f6-b44a-7563cec79249] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-x5d5w" [f6712170-0222-40f6-b44a-7563cec79249] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-x5d5w" [f6712170-0222-40f6-b44a-7563cec79249] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004531906s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 addons disable headlamp --alsologtostderr -v=1: (5.817239253s)
--- PASS: TestAddons/parallel/Headlamp (19.78s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-h92jl" [1d42cb8a-809f-4f9c-ba73-0879c4db7f8f] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005210787s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/LocalPath (55.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-892779 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-892779 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892779 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [976239d8-cbbb-497d-8c72-c02cf7ed862e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [976239d8-cbbb-497d-8c72-c02cf7ed862e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [976239d8-cbbb-497d-8c72-c02cf7ed862e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004138187s
addons_test.go:906: (dbg) Run:  kubectl --context addons-892779 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 ssh "cat /opt/local-path-provisioner/pvc-89c5613b-7edc-42a1-8a07-f72dc621843c_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-892779 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-892779 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.422778015s)
--- PASS: TestAddons/parallel/LocalPath (55.23s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-n492w" [17f0e2c2-6431-4f75-84a5-c4ccbb03c69f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004829964s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                    
TestAddons/parallel/Yakd (10.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-p59vp" [1618c069-a145-4dfe-aae2-b26d7a24087c] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004976832s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-892779 addons disable yakd --alsologtostderr -v=1: (5.770628107s)
--- PASS: TestAddons/parallel/Yakd (10.78s)

                                                
                                    
TestCertOptions (50.79s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-961573 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-961573 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (49.317092814s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-961573 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-961573 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-961573 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-961573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-961573
--- PASS: TestCertOptions (50.79s)

                                                
                                    
TestCertExpiration (269.15s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-601400 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-601400 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (49.13242336s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-601400 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-601400 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.007786567s)
helpers_test.go:175: Cleaning up "cert-expiration-601400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-601400
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-601400: (1.012223368s)
--- PASS: TestCertExpiration (269.15s)

                                                
                                    
TestForceSystemdFlag (70.98s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-320662 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-320662 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.707099707s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-320662 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-320662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-320662
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-320662: (1.05334539s)
--- PASS: TestForceSystemdFlag (70.98s)

                                                
                                    
TestForceSystemdEnv (50.71s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-771167 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-771167 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (49.721312242s)
helpers_test.go:175: Cleaning up "force-systemd-env-771167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-771167
--- PASS: TestForceSystemdEnv (50.71s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.57s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1028 12:05:11.009132  140303 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 12:05:11.009307  140303 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1028 12:05:11.039201  140303 install.go:62] docker-machine-driver-kvm2: exit status 1
W1028 12:05:11.039551  140303 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 12:05:11.039621  140303 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate890335807/001/docker-machine-driver-kvm2
I1028 12:05:11.368080  140303 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate890335807/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000015c40 gz:0xc000015c48 tar:0xc000015af0 tar.bz2:0xc000015b00 tar.gz:0xc000015b10 tar.xz:0xc000015c20 tar.zst:0xc000015c30 tbz2:0xc000015b00 tgz:0xc000015b10 txz:0xc000015c20 tzst:0xc000015c30 xz:0xc000015c50 zip:0xc000015c60 zst:0xc000015c58] Getters:map[file:0xc000906700 http:0xc000824370 https:0xc0008243c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 12:05:11.368127  140303 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate890335807/001/docker-machine-driver-kvm2
I1028 12:05:13.730471  140303 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 12:05:13.730557  140303 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1028 12:05:13.761428  140303 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1028 12:05:13.761458  140303 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1028 12:05:13.761515  140303 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1028 12:05:13.761563  140303 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate890335807/002/docker-machine-driver-kvm2
I1028 12:05:13.825687  140303 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate890335807/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc000015c40 gz:0xc000015c48 tar:0xc000015af0 tar.bz2:0xc000015b00 tar.gz:0xc000015b10 tar.xz:0xc000015c20 tar.zst:0xc000015c30 tbz2:0xc000015b00 tgz:0xc000015b10 txz:0xc000015c20 tzst:0xc000015c30 xz:0xc000015c50 zip:0xc000015c60 zst:0xc000015c58] Getters:map[file:0xc001b33910 http:0xc000985e50 https:0xc000985ea0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1028 12:05:13.825742  140303 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate890335807/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.57s)

                                                
                                    
TestErrorSpam/setup (42.61s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-700631 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-700631 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-700631 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-700631 --driver=kvm2  --container-runtime=crio: (42.612094809s)
--- PASS: TestErrorSpam/setup (42.61s)

                                                
                                    
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
TestErrorSpam/pause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
TestErrorSpam/unpause (1.89s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

                                                
                                    
TestErrorSpam/stop (5.86s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 stop: (2.318550063s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 stop: (1.933253259s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-700631 --log_dir /tmp/nospam-700631 stop: (1.612655272s)
--- PASS: TestErrorSpam/stop (5.86s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19876-132631/.minikube/files/etc/test/nested/copy/140303/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (87.61s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452974 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1028 11:07:38.998866  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:39.005306  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:39.016794  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:39.038259  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:39.079665  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:39.161190  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:39.322823  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:39.644633  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:40.286402  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:41.568761  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:44.130791  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:49.252210  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:07:59.493590  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:08:19.975046  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-452974 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m27.60958822s)
--- PASS: TestFunctional/serial/StartWithProxy (87.61s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.08s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1028 11:08:43.350994  140303 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452974 --alsologtostderr -v=8
E1028 11:09:00.936846  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-452974 --alsologtostderr -v=8: (35.074419883s)
functional_test.go:663: soft start took 35.075209264s for "functional-452974" cluster.
I1028 11:09:18.425822  140303 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (35.08s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-452974 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 cache add registry.k8s.io/pause:3.1: (1.160647106s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 cache add registry.k8s.io/pause:3.3: (1.305261978s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 cache add registry.k8s.io/pause:latest: (1.158680483s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.62s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-452974 /tmp/TestFunctionalserialCacheCmdcacheadd_local2353823627/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 cache add minikube-local-cache-test:functional-452974
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 cache add minikube-local-cache-test:functional-452974: (1.898598749s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 cache delete minikube-local-cache-test:functional-452974
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-452974
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)
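
For reference, the local-cache flow this test drives can be repeated by hand; the image tag and profile name are the ones from this run, while the build-context directory is illustrative:

	# build a throwaway image on the host, load it into minikube's cache, then clean up both sides
	docker build -t minikube-local-cache-test:functional-452974 ./build-context        # directory is illustrative
	out/minikube-linux-amd64 -p functional-452974 cache add minikube-local-cache-test:functional-452974
	out/minikube-linux-amd64 -p functional-452974 cache delete minikube-local-cache-test:functional-452974
	docker rmi minikube-local-cache-test:functional-452974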

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.63959ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 cache reload: (1.021273011s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
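
The reload sequence above can be reproduced manually with the same commands the test runs; the failed inspecti in the middle is the expected state, since the image was removed from the node but is still present in minikube's cache:

	out/minikube-linux-amd64 -p functional-452974 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-452974 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone from the node
	out/minikube-linux-amd64 -p functional-452974 cache reload                                            # pushes cached images back into the node
	out/minikube-linux-amd64 -p functional-452974 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again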

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 kubectl -- --context functional-452974 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-452974 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452974 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-452974 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.052106211s)
functional_test.go:761: restart took 34.052235101s for "functional-452974" cluster.
I1028 11:10:00.855126  140303 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (34.05s)
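
A minimal sketch of the restart with a component flag, as exercised above; the admission-plugin value is the test's own, and the profile dumps later in this report show it persisted under ExtraOptions in the profile config:

	out/minikube-linux-amd64 start -p functional-452974 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all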

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-452974 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
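
The health check parses the full pod JSON; a jsonpath variant that prints just component and phase would look roughly like this (a sketch, not the query the test itself uses):

	kubectl --context functional-452974 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.labels.component}{"\t"}{.status.phase}{"\n"}{end}'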

                                                
                                    
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 logs: (1.517417865s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 logs --file /tmp/TestFunctionalserialLogsFileCmd1038330017/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 logs --file /tmp/TestFunctionalserialLogsFileCmd1038330017/001/logs.txt: (1.510273882s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (5.72s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-452974 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-452974
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-452974: exit status 115 (289.221306ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.56:32111 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-452974 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-452974 delete -f testdata/invalidsvc.yaml: (2.231269951s)
--- PASS: TestFunctional/serial/InvalidService (5.72s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 config get cpus: exit status 14 (73.831964ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 config get cpus: exit status 14 (55.138939ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (21.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452974 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452974 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 149324: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.56s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452974 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-452974 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.79797ms)

                                                
                                                
-- stdout --
	* [functional-452974] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:10:35.877108  149213 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:10:35.877212  149213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:35.877221  149213 out.go:358] Setting ErrFile to fd 2...
	I1028 11:10:35.877228  149213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:35.877443  149213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:10:35.878133  149213 out.go:352] Setting JSON to false
	I1028 11:10:35.879123  149213 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3179,"bootTime":1730110657,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:10:35.879229  149213 start.go:139] virtualization: kvm guest
	I1028 11:10:35.881307  149213 out.go:177] * [functional-452974] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 11:10:35.882804  149213 notify.go:220] Checking for updates...
	I1028 11:10:35.882898  149213 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:10:35.884524  149213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:10:35.885781  149213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:10:35.887120  149213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:35.888374  149213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:10:35.889720  149213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:10:35.891618  149213 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:10:35.892229  149213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:10:35.892337  149213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:10:35.909809  149213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46529
	I1028 11:10:35.910374  149213 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:10:35.911001  149213 main.go:141] libmachine: Using API Version  1
	I1028 11:10:35.911018  149213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:10:35.911325  149213 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:10:35.911521  149213 main.go:141] libmachine: (functional-452974) Calling .DriverName
	I1028 11:10:35.911787  149213 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:10:35.912109  149213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:10:35.912154  149213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:10:35.929055  149213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45001
	I1028 11:10:35.929568  149213 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:10:35.930150  149213 main.go:141] libmachine: Using API Version  1
	I1028 11:10:35.930181  149213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:10:35.930526  149213 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:10:35.930724  149213 main.go:141] libmachine: (functional-452974) Calling .DriverName
	I1028 11:10:35.967903  149213 out.go:177] * Using the kvm2 driver based on existing profile
	I1028 11:10:35.969270  149213 start.go:297] selected driver: kvm2
	I1028 11:10:35.969289  149213 start.go:901] validating driver "kvm2" against &{Name:functional-452974 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-452974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:10:35.969420  149213 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:10:35.972362  149213 out.go:201] 
	W1028 11:10:35.973674  149213 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1028 11:10:35.975088  149213 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452974 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
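
The non-zero exit above is the dry-run memory validation path: requests below the usable minimum this build reports (1800MB) are rejected with RSRC_INSUFFICIENT_REQ_MEMORY before any VM work is attempted. A sketch of both outcomes; the 2048MB figure is illustrative, not taken from this run:

	out/minikube-linux-amd64 start -p functional-452974 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio    # exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
	out/minikube-linux-amd64 start -p functional-452974 --dry-run --memory 2048MB --driver=kvm2 --container-runtime=crio   # should pass validation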

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452974 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-452974 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.805046ms)

                                                
                                                
-- stdout --
	* [functional-452974] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:10:35.730104  149186 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:10:35.730222  149186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:35.730233  149186 out.go:358] Setting ErrFile to fd 2...
	I1028 11:10:35.730237  149186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:10:35.730520  149186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:10:35.731078  149186 out.go:352] Setting JSON to false
	I1028 11:10:35.732145  149186 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3179,"bootTime":1730110657,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 11:10:35.732255  149186 start.go:139] virtualization: kvm guest
	I1028 11:10:35.734428  149186 out.go:177] * [functional-452974] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1028 11:10:35.736472  149186 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 11:10:35.736536  149186 notify.go:220] Checking for updates...
	I1028 11:10:35.739022  149186 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 11:10:35.740372  149186 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 11:10:35.741652  149186 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 11:10:35.743138  149186 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 11:10:35.744793  149186 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 11:10:35.746844  149186 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:10:35.747442  149186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:10:35.747533  149186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:10:35.762992  149186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I1028 11:10:35.763431  149186 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:10:35.763993  149186 main.go:141] libmachine: Using API Version  1
	I1028 11:10:35.764020  149186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:10:35.764409  149186 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:10:35.764573  149186 main.go:141] libmachine: (functional-452974) Calling .DriverName
	I1028 11:10:35.764801  149186 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 11:10:35.765096  149186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:10:35.765131  149186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:10:35.785572  149186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35829
	I1028 11:10:35.786160  149186 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:10:35.786677  149186 main.go:141] libmachine: Using API Version  1
	I1028 11:10:35.786697  149186 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:10:35.787083  149186 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:10:35.787295  149186 main.go:141] libmachine: (functional-452974) Calling .DriverName
	I1028 11:10:35.822080  149186 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1028 11:10:35.823302  149186 start.go:297] selected driver: kvm2
	I1028 11:10:35.823317  149186 start.go:901] validating driver "kvm2" against &{Name:functional-452974 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-452974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1028 11:10:35.823413  149186 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 11:10:35.825386  149186 out.go:201] 
	W1028 11:10:35.826739  149186 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1028 11:10:35.828157  149186 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (23.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-452974 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-452974 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-r8phg" [550d5b64-a535-4d14-aa47-6de99de303fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-r8phg" [550d5b64-a535-4d14-aa47-6de99de303fb] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.004536744s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.56:30197
functional_test.go:1675: http://192.168.39.56:30197: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-r8phg

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.56:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.56:30197
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.53s)
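
The connect test is the usual NodePort round trip: create a deployment, expose it, ask minikube for the node URL, then fetch it. Everything except the final curl appears verbatim in the log; the curl stands in for the test's own HTTP check, and the URL shown is the one returned in this run:

	kubectl --context functional-452974 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-452974 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-452974 service hello-node-connect --url   # printed http://192.168.39.56:30197 here
	curl http://192.168.39.56:30197/                                                 # echoserver echoes the request back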

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (47.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0154c6d7-9ba6-447a-88d0-b66e07b70b24] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003897416s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-452974 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-452974 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-452974 get pvc myclaim -o=json
I1028 11:10:15.852666  140303 retry.go:31] will retry after 2.889204165s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:1603952b-f609-4fc8-84a1-ea08a0d05b97 ResourceVersion:765 Generation:0 CreationTimestamp:2024-10-28 11:10:15 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001accc60 VolumeMode:0xc001accc70 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-452974 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-452974 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f7cbeaf0-c532-478a-9370-dbf9baf1dd2c] Pending
helpers_test.go:344: "sp-pod" [f7cbeaf0-c532-478a-9370-dbf9baf1dd2c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1028 11:10:22.858948  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [f7cbeaf0-c532-478a-9370-dbf9baf1dd2c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004093253s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-452974 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-452974 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-452974 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [390a40c0-7602-489d-905d-6e08c49dbbf0] Pending
helpers_test.go:344: "sp-pod" [390a40c0-7602-489d-905d-6e08c49dbbf0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [390a40c0-7602-489d-905d-6e08c49dbbf0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003879608s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-452974 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.58s)
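
What the test asserts is that data written through the claim survives deleting and recreating the consuming pod. Condensed from the log (manifests are the suite's own testdata):

	kubectl --context functional-452974 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-452974 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-452974 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-452974 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-452974 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-452974 exec sp-pod -- ls /tmp/mount   # foo should still be present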

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh -n functional-452974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 cp functional-452974:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2516494399/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh -n functional-452974 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh -n functional-452974 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.36s)

                                                
                                    
TestFunctional/parallel/MySQL (22.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-452974 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-5nl5f" [ad791359-fcbf-4fb0-870c-76a30a89b87e] Pending
helpers_test.go:344: "mysql-6cdb49bbb-5nl5f" [ad791359-fcbf-4fb0-870c-76a30a89b87e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-5nl5f" [ad791359-fcbf-4fb0-870c-76a30a89b87e] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004192923s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-452974 exec mysql-6cdb49bbb-5nl5f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-452974 exec mysql-6cdb49bbb-5nl5f -- mysql -ppassword -e "show databases;": exit status 1 (409.035443ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1028 11:10:31.299735  140303 retry.go:31] will retry after 818.782233ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-452974 exec mysql-6cdb49bbb-5nl5f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.72s)
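
The ERROR 2002 above is the usual window where the pod reports Running but mysqld is still initializing, and the harness simply retries. A hand-rolled equivalent might look like the loop below; the deployment name is inferred from the pod name in this run:

	# retry the query until mysqld accepts connections (sketch)
	for i in $(seq 1 10); do
	  kubectl --context functional-452974 exec deploy/mysql -- mysql -ppassword -e "show databases;" && break
	  sleep 5
	done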

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/140303/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo cat /etc/test/nested/copy/140303/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
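
File sync is driven by the files/ tree under the minikube home directory: anything placed there is copied into the node at the same relative path when the profile starts. The 140303 path component matches this test run's process ID. A sketch, assuming MINIKUBE_HOME is set as in this job:

	mkdir -p $MINIKUBE_HOME/files/etc/test/nested/copy/140303
	echo "Test file for checking file sync process" > $MINIKUBE_HOME/files/etc/test/nested/copy/140303/hosts
	# picked up on the next start/restart of the profile
	out/minikube-linux-amd64 -p functional-452974 ssh "sudo cat /etc/test/nested/copy/140303/hosts"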

                                                
                                    
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/140303.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo cat /etc/ssl/certs/140303.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/140303.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo cat /usr/share/ca-certificates/140303.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/1403032.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo cat /etc/ssl/certs/1403032.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/1403032.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo cat /usr/share/ca-certificates/1403032.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-452974 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 ssh "sudo systemctl is-active docker": exit status 1 (254.385404ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 ssh "sudo systemctl is-active containerd": exit status 1 (246.019321ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
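
systemctl is-active prints the unit state and exits non-zero for anything other than active, which is why the two inactive runtimes above surface as "ssh: Process exited with status 3". The selected runtime can be checked the same way; the crio unit name is assumed here rather than taken from the log:

	out/minikube-linux-amd64 -p functional-452974 ssh "sudo systemctl is-active crio"         # expected: active, exit 0
	out/minikube-linux-amd64 -p functional-452974 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	out/minikube-linux-amd64 -p functional-452974 ssh "sudo systemctl is-active containerd"   # inactive, exit 3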

                                                
                                    
TestFunctional/parallel/License (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.163114059s)
--- PASS: TestFunctional/parallel/License (1.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (22.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-452974 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-452974 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-csq58" [48daa114-e1d0-443d-a6ee-dd530caa16e8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-csq58" [48daa114-e1d0-443d-a6ee-dd530caa16e8] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.105433821s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "339.890505ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.390477ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "446.885418ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.026124ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdany-port140405747/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730113833744845431" to /tmp/TestFunctionalparallelMountCmdany-port140405747/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730113833744845431" to /tmp/TestFunctionalparallelMountCmdany-port140405747/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730113833744845431" to /tmp/TestFunctionalparallelMountCmdany-port140405747/001/test-1730113833744845431
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.864884ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 11:10:34.000021  140303 retry.go:31] will retry after 582.985824ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 28 11:10 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 28 11:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 28 11:10 test-1730113833744845431
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh cat /mount-9p/test-1730113833744845431
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-452974 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cd11b377-1516-47ae-9797-1252ced71a2c] Pending
helpers_test.go:344: "busybox-mount" [cd11b377-1516-47ae-9797-1252ced71a2c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cd11b377-1516-47ae-9797-1252ced71a2c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cd11b377-1516-47ae-9797-1252ced71a2c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004491682s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-452974 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdany-port140405747/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.92s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 service list -o json
functional_test.go:1494: Took "515.339015ms" to run "out/minikube-linux-amd64 -p functional-452974 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.56:32528
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.56:32528
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452974 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-452974
localhost/kicbase/echo-server:functional-452974
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452974 image ls --format short --alsologtostderr:
I1028 11:10:48.031824  150272 out.go:345] Setting OutFile to fd 1 ...
I1028 11:10:48.032087  150272 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.032098  150272 out.go:358] Setting ErrFile to fd 2...
I1028 11:10:48.032102  150272 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.032321  150272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
I1028 11:10:48.032975  150272 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.033097  150272 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.033459  150272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.033511  150272 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.049089  150272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46747
I1028 11:10:48.049666  150272 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.050434  150272 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.050490  150272 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.050945  150272 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.051170  150272 main.go:141] libmachine: (functional-452974) Calling .GetState
I1028 11:10:48.053034  150272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.053073  150272 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.070285  150272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
I1028 11:10:48.071017  150272 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.071574  150272 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.071602  150272 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.071972  150272 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.072189  150272 main.go:141] libmachine: (functional-452974) Calling .DriverName
I1028 11:10:48.072379  150272 ssh_runner.go:195] Run: systemctl --version
I1028 11:10:48.072414  150272 main.go:141] libmachine: (functional-452974) Calling .GetSSHHostname
I1028 11:10:48.075822  150272 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.076118  150272 main.go:141] libmachine: (functional-452974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:f0:70", ip: ""} in network mk-functional-452974: {Iface:virbr1 ExpiryTime:2024-10-28 12:07:31 +0000 UTC Type:0 Mac:52:54:00:54:f0:70 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-452974 Clientid:01:52:54:00:54:f0:70}
I1028 11:10:48.076154  150272 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined IP address 192.168.39.56 and MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.076432  150272 main.go:141] libmachine: (functional-452974) Calling .GetSSHPort
I1028 11:10:48.076573  150272 main.go:141] libmachine: (functional-452974) Calling .GetSSHKeyPath
I1028 11:10:48.076702  150272 main.go:141] libmachine: (functional-452974) Calling .GetSSHUsername
I1028 11:10:48.076875  150272 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/functional-452974/id_rsa Username:docker}
I1028 11:10:48.197064  150272 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:10:48.276294  150272 main.go:141] libmachine: Making call to close driver server
I1028 11:10:48.276310  150272 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:48.276634  150272 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:48.276654  150272 main.go:141] libmachine: (functional-452974) DBG | Closing plugin on server side
I1028 11:10:48.276659  150272 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:10:48.276683  150272 main.go:141] libmachine: Making call to close driver server
I1028 11:10:48.276695  150272 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:48.276920  150272 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:48.276934  150272 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:10:48.276959  150272 main.go:141] libmachine: (functional-452974) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452974 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 3b25b682ea82b | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-452974  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-452974  | 9b4df12b4201d | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452974 image ls --format table --alsologtostderr:
I1028 11:10:48.774232  150410 out.go:345] Setting OutFile to fd 1 ...
I1028 11:10:48.774378  150410 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.774388  150410 out.go:358] Setting ErrFile to fd 2...
I1028 11:10:48.774396  150410 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.774572  150410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
I1028 11:10:48.775178  150410 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.775325  150410 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.775709  150410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.775766  150410 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.791150  150410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
I1028 11:10:48.791663  150410 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.792300  150410 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.792333  150410 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.792686  150410 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.792907  150410 main.go:141] libmachine: (functional-452974) Calling .GetState
I1028 11:10:48.794945  150410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.794986  150410 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.819205  150410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
I1028 11:10:48.819741  150410 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.820334  150410 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.820349  150410 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.820913  150410 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.821047  150410 main.go:141] libmachine: (functional-452974) Calling .DriverName
I1028 11:10:48.821184  150410 ssh_runner.go:195] Run: systemctl --version
I1028 11:10:48.821204  150410 main.go:141] libmachine: (functional-452974) Calling .GetSSHHostname
I1028 11:10:48.824339  150410 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.824772  150410 main.go:141] libmachine: (functional-452974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:f0:70", ip: ""} in network mk-functional-452974: {Iface:virbr1 ExpiryTime:2024-10-28 12:07:31 +0000 UTC Type:0 Mac:52:54:00:54:f0:70 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-452974 Clientid:01:52:54:00:54:f0:70}
I1028 11:10:48.824799  150410 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined IP address 192.168.39.56 and MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.825003  150410 main.go:141] libmachine: (functional-452974) Calling .GetSSHPort
I1028 11:10:48.825686  150410 main.go:141] libmachine: (functional-452974) Calling .GetSSHKeyPath
I1028 11:10:48.825865  150410 main.go:141] libmachine: (functional-452974) Calling .GetSSHUsername
I1028 11:10:48.826026  150410 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/functional-452974/id_rsa Username:docker}
I1028 11:10:48.997729  150410 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:10:49.101982  150410 main.go:141] libmachine: Making call to close driver server
I1028 11:10:49.101998  150410 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:49.102296  150410 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:49.102312  150410 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:10:49.102336  150410 main.go:141] libmachine: Making call to close driver server
I1028 11:10:49.102347  150410 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:49.102366  150410 main.go:141] libmachine: (functional-452974) DBG | Closing plugin on server side
I1028 11:10:49.102611  150410 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:49.102631  150410 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452974 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-452974"],"size":"4943877"},{"id":"2e96e5913fc06e3d26915af
3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDiges
ts":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818008"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e
9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b445
03","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-miniku
be/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9b4df12b4201dfc3179773a4a1827e9d74ccdfca7f125cfe3165ac6773c3cae2","repoDigests":["localhost/minikube-local-cache-test@sha256:a5cd85b5e909e5189551d32c606f48bc9cf7b8d39b4660202e39f804e32d72c6"],"repoTags":["localhost/minikube-local-cache-test:functional-452974"],"size":"3330"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-schedul
er@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452974 image ls --format json --alsologtostderr:
I1028 11:10:48.433896  150343 out.go:345] Setting OutFile to fd 1 ...
I1028 11:10:48.434014  150343 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.434024  150343 out.go:358] Setting ErrFile to fd 2...
I1028 11:10:48.434036  150343 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.434223  150343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
I1028 11:10:48.434825  150343 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.434925  150343 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.435295  150343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.435351  150343 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.451068  150343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
I1028 11:10:48.451671  150343 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.452379  150343 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.452404  150343 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.452821  150343 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.453021  150343 main.go:141] libmachine: (functional-452974) Calling .GetState
I1028 11:10:48.455279  150343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.455345  150343 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.471773  150343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
I1028 11:10:48.472337  150343 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.472776  150343 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.472799  150343 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.473111  150343 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.473288  150343 main.go:141] libmachine: (functional-452974) Calling .DriverName
I1028 11:10:48.473508  150343 ssh_runner.go:195] Run: systemctl --version
I1028 11:10:48.473556  150343 main.go:141] libmachine: (functional-452974) Calling .GetSSHHostname
I1028 11:10:48.476351  150343 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.476837  150343 main.go:141] libmachine: (functional-452974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:f0:70", ip: ""} in network mk-functional-452974: {Iface:virbr1 ExpiryTime:2024-10-28 12:07:31 +0000 UTC Type:0 Mac:52:54:00:54:f0:70 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-452974 Clientid:01:52:54:00:54:f0:70}
I1028 11:10:48.476876  150343 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined IP address 192.168.39.56 and MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.477051  150343 main.go:141] libmachine: (functional-452974) Calling .GetSSHPort
I1028 11:10:48.477228  150343 main.go:141] libmachine: (functional-452974) Calling .GetSSHKeyPath
I1028 11:10:48.477462  150343 main.go:141] libmachine: (functional-452974) Calling .GetSSHUsername
I1028 11:10:48.477673  150343 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/functional-452974/id_rsa Username:docker}
I1028 11:10:48.624094  150343 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:10:48.720712  150343 main.go:141] libmachine: Making call to close driver server
I1028 11:10:48.720737  150343 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:48.721024  150343 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:48.721040  150343 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:10:48.721061  150343 main.go:141] libmachine: Making call to close driver server
I1028 11:10:48.721073  150343 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:48.721299  150343 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:48.721313  150343 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:10:48.721336  150343 main.go:141] libmachine: (functional-452974) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452974 image ls --format yaml --alsologtostderr:
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26
repoTags:
- docker.io/library/nginx:latest
size: "195818008"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-452974
size: "4943877"
- id: 9b4df12b4201dfc3179773a4a1827e9d74ccdfca7f125cfe3165ac6773c3cae2
repoDigests:
- localhost/minikube-local-cache-test@sha256:a5cd85b5e909e5189551d32c606f48bc9cf7b8d39b4660202e39f804e32d72c6
repoTags:
- localhost/minikube-local-cache-test:functional-452974
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452974 image ls --format yaml --alsologtostderr:
I1028 11:10:48.034890  150273 out.go:345] Setting OutFile to fd 1 ...
I1028 11:10:48.035063  150273 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.035079  150273 out.go:358] Setting ErrFile to fd 2...
I1028 11:10:48.035085  150273 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.035394  150273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
I1028 11:10:48.036244  150273 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.036408  150273 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.036940  150273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.037011  150273 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.053192  150273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
I1028 11:10:48.053894  150273 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.054638  150273 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.054660  150273 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.055105  150273 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.055290  150273 main.go:141] libmachine: (functional-452974) Calling .GetState
I1028 11:10:48.057339  150273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.057389  150273 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.073480  150273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
I1028 11:10:48.074100  150273 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.074654  150273 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.074679  150273 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.075100  150273 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.075327  150273 main.go:141] libmachine: (functional-452974) Calling .DriverName
I1028 11:10:48.075573  150273 ssh_runner.go:195] Run: systemctl --version
I1028 11:10:48.075605  150273 main.go:141] libmachine: (functional-452974) Calling .GetSSHHostname
I1028 11:10:48.078876  150273 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.079354  150273 main.go:141] libmachine: (functional-452974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:f0:70", ip: ""} in network mk-functional-452974: {Iface:virbr1 ExpiryTime:2024-10-28 12:07:31 +0000 UTC Type:0 Mac:52:54:00:54:f0:70 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-452974 Clientid:01:52:54:00:54:f0:70}
I1028 11:10:48.079376  150273 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined IP address 192.168.39.56 and MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.079489  150273 main.go:141] libmachine: (functional-452974) Calling .GetSSHPort
I1028 11:10:48.079661  150273 main.go:141] libmachine: (functional-452974) Calling .GetSSHKeyPath
I1028 11:10:48.079792  150273 main.go:141] libmachine: (functional-452974) Calling .GetSSHUsername
I1028 11:10:48.079919  150273 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/functional-452974/id_rsa Username:docker}
I1028 11:10:48.232474  150273 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:10:48.371164  150273 main.go:141] libmachine: Making call to close driver server
I1028 11:10:48.371185  150273 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:48.371466  150273 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:48.371491  150273 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:10:48.371504  150273 main.go:141] libmachine: Making call to close driver server
I1028 11:10:48.371513  150273 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:48.371766  150273 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:48.371788  150273 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 ssh pgrep buildkitd: exit status 1 (278.673269ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image build -t localhost/my-image:functional-452974 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 image build -t localhost/my-image:functional-452974 testdata/build --alsologtostderr: (4.843273475s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452974 image build -t localhost/my-image:functional-452974 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b2de51f356e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-452974
--> ba661add077
Successfully tagged localhost/my-image:functional-452974
ba661add077b5c1cc1084ac02b922a64014c0455bf6364370dd8ff6c5ccffe2d
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452974 image build -t localhost/my-image:functional-452974 testdata/build --alsologtostderr:
I1028 11:10:48.610223  150387 out.go:345] Setting OutFile to fd 1 ...
I1028 11:10:48.610369  150387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.610378  150387 out.go:358] Setting ErrFile to fd 2...
I1028 11:10:48.610383  150387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:10:48.610547  150387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
I1028 11:10:48.611127  150387 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.611652  150387 config.go:182] Loaded profile config "functional-452974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1028 11:10:48.612075  150387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.612139  150387 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.627713  150387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
I1028 11:10:48.628233  150387 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.628797  150387 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.628822  150387 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.629282  150387 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.629585  150387 main.go:141] libmachine: (functional-452974) Calling .GetState
I1028 11:10:48.631710  150387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1028 11:10:48.631792  150387 main.go:141] libmachine: Launching plugin server for driver kvm2
I1028 11:10:48.647927  150387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
I1028 11:10:48.648628  150387 main.go:141] libmachine: () Calling .GetVersion
I1028 11:10:48.649457  150387 main.go:141] libmachine: Using API Version  1
I1028 11:10:48.649495  150387 main.go:141] libmachine: () Calling .SetConfigRaw
I1028 11:10:48.649844  150387 main.go:141] libmachine: () Calling .GetMachineName
I1028 11:10:48.650054  150387 main.go:141] libmachine: (functional-452974) Calling .DriverName
I1028 11:10:48.650249  150387 ssh_runner.go:195] Run: systemctl --version
I1028 11:10:48.650283  150387 main.go:141] libmachine: (functional-452974) Calling .GetSSHHostname
I1028 11:10:48.653154  150387 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.653518  150387 main.go:141] libmachine: (functional-452974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:f0:70", ip: ""} in network mk-functional-452974: {Iface:virbr1 ExpiryTime:2024-10-28 12:07:31 +0000 UTC Type:0 Mac:52:54:00:54:f0:70 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-452974 Clientid:01:52:54:00:54:f0:70}
I1028 11:10:48.653565  150387 main.go:141] libmachine: (functional-452974) DBG | domain functional-452974 has defined IP address 192.168.39.56 and MAC address 52:54:00:54:f0:70 in network mk-functional-452974
I1028 11:10:48.653643  150387 main.go:141] libmachine: (functional-452974) Calling .GetSSHPort
I1028 11:10:48.653806  150387 main.go:141] libmachine: (functional-452974) Calling .GetSSHKeyPath
I1028 11:10:48.653932  150387 main.go:141] libmachine: (functional-452974) Calling .GetSSHUsername
I1028 11:10:48.654082  150387 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/functional-452974/id_rsa Username:docker}
I1028 11:10:48.768378  150387 build_images.go:161] Building image from path: /tmp/build.2565518435.tar
I1028 11:10:48.768481  150387 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1028 11:10:48.795326  150387 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2565518435.tar
I1028 11:10:48.812890  150387 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2565518435.tar: stat -c "%s %y" /var/lib/minikube/build/build.2565518435.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2565518435.tar': No such file or directory
I1028 11:10:48.812930  150387 ssh_runner.go:362] scp /tmp/build.2565518435.tar --> /var/lib/minikube/build/build.2565518435.tar (3072 bytes)
I1028 11:10:48.914536  150387 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2565518435
I1028 11:10:48.935306  150387 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2565518435 -xf /var/lib/minikube/build/build.2565518435.tar
I1028 11:10:48.965017  150387 crio.go:315] Building image: /var/lib/minikube/build/build.2565518435
I1028 11:10:48.965092  150387 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-452974 /var/lib/minikube/build/build.2565518435 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1028 11:10:53.376016  150387 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-452974 /var/lib/minikube/build/build.2565518435 --cgroup-manager=cgroupfs: (4.410895313s)
I1028 11:10:53.376083  150387 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2565518435
I1028 11:10:53.387722  150387 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2565518435.tar
I1028 11:10:53.398694  150387 build_images.go:217] Built localhost/my-image:functional-452974 from /tmp/build.2565518435.tar
I1028 11:10:53.398744  150387 build_images.go:133] succeeded building to: functional-452974
I1028 11:10:53.398751  150387 build_images.go:134] failed building to: 
I1028 11:10:53.398774  150387 main.go:141] libmachine: Making call to close driver server
I1028 11:10:53.398786  150387 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:53.399137  150387 main.go:141] libmachine: (functional-452974) DBG | Closing plugin on server side
I1028 11:10:53.399143  150387 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:53.399160  150387 main.go:141] libmachine: Making call to close connection to plugin binary
I1028 11:10:53.399173  150387 main.go:141] libmachine: Making call to close driver server
I1028 11:10:53.399187  150387 main.go:141] libmachine: (functional-452974) Calling .Close
I1028 11:10:53.399406  150387 main.go:141] libmachine: Successfully made call to close driver server
I1028 11:10:53.399418  150387 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls
2024/10/28 11:10:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.815628886s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-452974
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image load --daemon kicbase/echo-server:functional-452974 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 image load --daemon kicbase/echo-server:functional-452974 --alsologtostderr: (1.601985031s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image load --daemon kicbase/echo-server:functional-452974 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-452974
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image load --daemon kicbase/echo-server:functional-452974 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-452974 image load --daemon kicbase/echo-server:functional-452974 --alsologtostderr: (1.109763996s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image save kicbase/echo-server:functional-452974 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image rm kicbase/echo-server:functional-452974 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.02s)
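Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile above round-trip an image through a tarball. A minimal sketch under the same profile; /tmp/echo-server-save.tar is a placeholder path, not the workspace path used by the job.
    # export the image from the cluster runtime to a tar archive on the host
    minikube -p functional-452974 image save kicbase/echo-server:functional-452974 /tmp/echo-server-save.tar
    # remove it from the cluster, then restore it from the archive
    minikube -p functional-452974 image rm kicbase/echo-server:functional-452974
    minikube -p functional-452974 image load /tmp/echo-server-save.tar
    minikube -p functional-452974 image ls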

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdspecific-port3059898244/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (250.64649ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 11:10:44.913575  140303 retry.go:31] will retry after 421.373606ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdspecific-port3059898244/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 ssh "sudo umount -f /mount-9p": exit status 1 (251.204129ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-452974 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdspecific-port3059898244/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)
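The specific-port test above pins the 9p mount server to a fixed port; note in the log that the first findmnt attempt failed and was retried while the mount came up. A rough manual equivalent (the host directory is a placeholder; port 46464 and /mount-9p come from the log, and the mount command is backgrounded here because it blocks):
    minikube mount -p functional-452974 /tmp/host-dir:/mount-9p --port 46464 &
    # verify the 9p mount from inside the guest, list it, then force-unmount
    minikube -p functional-452974 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-452974 ssh -- ls -la /mount-9p
    minikube -p functional-452974 ssh "sudo umount -f /mount-9p"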

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-452974
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 image save --daemon kicbase/echo-server:functional-452974 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-452974
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097285394/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097285394/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097285394/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T" /mount1: exit status 1 (274.956825ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1028 11:10:46.766980  140303 retry.go:31] will retry after 384.012969ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-452974 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097285394/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097285394/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452974 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097285394/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)
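VerifyCleanup above starts several mounts and then tears them all down with the --kill flag rather than unmounting each one. A sketch with a placeholder host path:
    # expose the same host directory at three guest paths
    minikube mount -p functional-452974 /tmp/host-dir:/mount1 &
    minikube mount -p functional-452974 /tmp/host-dir:/mount2 &
    minikube mount -p functional-452974 /tmp/host-dir:/mount3 &
    # check one of them, then kill every mount process for the profile in one go
    minikube -p functional-452974 ssh "findmnt -T" /mount1
    minikube mount -p functional-452974 --kill=true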

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.85s)
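The two Version subtests above correspond to the following invocations (minikube again standing in for the built binary):
    # client version string only
    minikube -p functional-452974 version --short
    # bundled component versions, emitted as JSON
    minikube -p functional-452974 version -o=json --components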

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-452974 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-452974
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-452974
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-452974
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (211.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-928358 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 11:12:38.998157  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:13:06.700893  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-928358 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m30.321248484s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (211.01s)
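StartCluster above brings up a multi-control-plane cluster in a single command; the --ha flag is what distinguishes it from a plain start. As logged:
    minikube start -p ha-928358 --wait=true --memory=2200 --ha -v=7 --alsologtostderr \
        --driver=kvm2 --container-runtime=crio
    # status reports every control-plane and worker node in the profile
    minikube -p ha-928358 status -v=7 --alsologtostderr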

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-928358 -- rollout status deployment/busybox: (4.627093796s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-h8ctp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-tx5tk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-h8ctp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-tx5tk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-h8ctp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-tx5tk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.96s)
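DeployApp above validates in-cluster DNS from every busybox replica. The same check by hand, with kubectl invoked through the profile-scoped wrapper as in the log; pod names such as busybox-7dff88458-dnw8z are generated per run:
    minikube kubectl -p ha-928358 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    minikube kubectl -p ha-928358 -- rollout status deployment/busybox
    # resolve an external name, the short service name, and the fully-qualified service name from inside a pod
    minikube kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- nslookup kubernetes.io
    minikube kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- nslookup kubernetes.default
    minikube kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- nslookup kubernetes.default.svc.cluster.local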

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-h8ctp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-h8ctp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-tx5tk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-928358 -- exec busybox-7dff88458-tx5tk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)
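PingHostFromPods above checks that pods can reach the host: it resolves host.minikube.internal inside a pod and pings the host address used in this run (192.168.39.1 on the KVM network). Roughly:
    minikube kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    minikube kubectl -p ha-928358 -- exec busybox-7dff88458-dnw8z -- sh -c "ping -c 1 192.168.39.1"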

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (59.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-928358 -v=7 --alsologtostderr
E1028 11:15:09.886920  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:09.893401  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:09.904846  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:09.926276  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:09.967809  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:10.049251  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:10.210994  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:10.532417  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:11.174210  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:12.456474  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:15.018275  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:20.139732  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:15:30.381971  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-928358 -v=7 --alsologtostderr: (58.868940729s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.74s)
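AddWorkerNode above grows the running HA cluster by one node; without --control-plane the new node joins as a worker:
    minikube node add -p ha-928358 -v=7 --alsologtostderr
    minikube -p ha-928358 status -v=7 --alsologtostderr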

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-928358 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp testdata/cp-test.txt ha-928358:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358:/home/docker/cp-test.txt ha-928358-m02:/home/docker/cp-test_ha-928358_ha-928358-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m02 "sudo cat /home/docker/cp-test_ha-928358_ha-928358-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358:/home/docker/cp-test.txt ha-928358-m03:/home/docker/cp-test_ha-928358_ha-928358-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m03 "sudo cat /home/docker/cp-test_ha-928358_ha-928358-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358:/home/docker/cp-test.txt ha-928358-m04:/home/docker/cp-test_ha-928358_ha-928358-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m04 "sudo cat /home/docker/cp-test_ha-928358_ha-928358-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp testdata/cp-test.txt ha-928358-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m02:/home/docker/cp-test.txt ha-928358:/home/docker/cp-test_ha-928358-m02_ha-928358.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358 "sudo cat /home/docker/cp-test_ha-928358-m02_ha-928358.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m02:/home/docker/cp-test.txt ha-928358-m03:/home/docker/cp-test_ha-928358-m02_ha-928358-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m03 "sudo cat /home/docker/cp-test_ha-928358-m02_ha-928358-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m02:/home/docker/cp-test.txt ha-928358-m04:/home/docker/cp-test_ha-928358-m02_ha-928358-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m04 "sudo cat /home/docker/cp-test_ha-928358-m02_ha-928358-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp testdata/cp-test.txt ha-928358-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt ha-928358:/home/docker/cp-test_ha-928358-m03_ha-928358.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358 "sudo cat /home/docker/cp-test_ha-928358-m03_ha-928358.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt ha-928358-m02:/home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m02 "sudo cat /home/docker/cp-test_ha-928358-m03_ha-928358-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m03:/home/docker/cp-test.txt ha-928358-m04:/home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m04 "sudo cat /home/docker/cp-test_ha-928358-m03_ha-928358-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp testdata/cp-test.txt ha-928358-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile460910791/001/cp-test_ha-928358-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt ha-928358:/home/docker/cp-test_ha-928358-m04_ha-928358.txt
E1028 11:15:50.863860  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358 "sudo cat /home/docker/cp-test_ha-928358-m04_ha-928358.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt ha-928358-m02:/home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m02 "sudo cat /home/docker/cp-test_ha-928358-m04_ha-928358-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 cp ha-928358-m04:/home/docker/cp-test.txt ha-928358-m03:/home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 ssh -n ha-928358-m03 "sudo cat /home/docker/cp-test_ha-928358-m04_ha-928358-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.47s)
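CopyFile above exercises minikube cp in every direction: host to node, node to host, and node to node, verifying each copy over ssh. One leg of that matrix, taken from the log:
    # host -> primary node, then read it back
    minikube -p ha-928358 cp testdata/cp-test.txt ha-928358:/home/docker/cp-test.txt
    minikube -p ha-928358 ssh -n ha-928358 "sudo cat /home/docker/cp-test.txt"
    # node -> node uses the same command with node:path on both sides
    minikube -p ha-928358 cp ha-928358:/home/docker/cp-test.txt ha-928358-m02:/home/docker/cp-test_ha-928358_ha-928358-m02.txt
    minikube -p ha-928358 ssh -n ha-928358-m02 "sudo cat /home/docker/cp-test_ha-928358_ha-928358-m02.txt"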

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-928358 node delete m03 -v=7 --alsologtostderr: (15.953232685s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.76s)
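DeleteSecondaryNode above removes one control-plane member (m03) and then checks that the remaining nodes are still Ready:
    minikube -p ha-928358 node delete m03 -v=7 --alsologtostderr
    minikube -p ha-928358 status -v=7 --alsologtostderr
    kubectl get nodes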

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (354.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-928358 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 11:30:09.886377  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:31:32.951563  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:32:38.998505  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-928358 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m53.840294543s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (354.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (79.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-928358 --control-plane -v=7 --alsologtostderr
E1028 11:35:09.886405  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-928358 --control-plane -v=7 --alsologtostderr: (1m18.85839714s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-928358 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.75s)
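AddSecondaryNode above is the control-plane counterpart of the earlier worker add; the only difference in the invocation is the --control-plane flag:
    minikube node add -p ha-928358 --control-plane -v=7 --alsologtostderr
    minikube -p ha-928358 status -v=7 --alsologtostderr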

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
x
+
TestJSONOutput/start/Command (56.88s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-163829 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-163829 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (56.878590836s)
--- PASS: TestJSONOutput/start/Command (56.88s)
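The TestJSONOutput group drives ordinary lifecycle commands with --output=json, which makes minikube emit one structured JSON event per line instead of human-readable text (the TestErrorJSONOutput stdout further down shows what those events look like). The start case above, as logged:
    minikube start -p json-output-163829 --output=json --user=testUser --memory=2200 \
        --wait=true --driver=kvm2 --container-runtime=crio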

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-163829 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-163829 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-163829 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-163829 --output=json --user=testUser: (7.366914818s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-662864 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-662864 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.873864ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d7609392-664c-481e-9640-57f8ca1238ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-662864] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"07da4e64-5a8a-4645-82b9-996b042a3281","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19876"}}
	{"specversion":"1.0","id":"76deae6e-3722-4141-822d-4da53fef4f0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cccebec2-88da-49e2-8fed-a984ccc70358","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig"}}
	{"specversion":"1.0","id":"69da2c1d-f68c-40ff-bc39-7a0cafa504f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube"}}
	{"specversion":"1.0","id":"3f51351d-7d29-4c59-9924-afd21965f5bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"09f4cba3-12d0-4cb3-bafa-6ace9b0844b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c48697c1-9bff-429d-b16c-62c86a6f8ab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-662864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-662864
--- PASS: TestErrorJSONOutput (0.21s)
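Because each line of --output=json is a self-contained JSON object, the error event shown above can be extracted mechanically. A sketch using jq (jq is not part of the test itself; the 'fail' driver is deliberately unsupported, so the start command exits with status 56):
    minikube start -p json-output-error-662864 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on linux/amd64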

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (88.47s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-295973 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-295973 --driver=kvm2  --container-runtime=crio: (42.574877052s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-319762 --driver=kvm2  --container-runtime=crio
E1028 11:37:38.998931  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-319762 --driver=kvm2  --container-runtime=crio: (42.848902653s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-295973
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-319762
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-319762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-319762
helpers_test.go:175: Cleaning up "first-295973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-295973
--- PASS: TestMinikubeProfile (88.47s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-933734 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-933734 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.355819488s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.36s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-933734 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-933734 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
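TestMountStart starts a VM with no Kubernetes at all, only a 9p mount of the host, and the Verify steps simply look for that mount from inside the guest. As logged for the first profile:
    minikube start -p mount-start-1-933734 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
        --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
    # the mounted host directory appears at /minikube-host inside the guest
    minikube -p mount-start-1-933734 ssh -- ls /minikube-host
    minikube -p mount-start-1-933734 ssh -- mount | grep 9p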

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-949602 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-949602 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.952988211s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.95s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949602 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949602 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-933734 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949602 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949602 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-949602
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-949602: (1.382425903s)
--- PASS: TestMountStart/serial/Stop (1.38s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.96s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-949602
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-949602: (21.958366939s)
--- PASS: TestMountStart/serial/RestartStopped (22.96s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949602 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949602 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (112.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-450140 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 11:40:09.886709  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 11:40:42.065907  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-450140 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.554035128s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.96s)
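
A condensed sketch of the two-node bring-up exercised above, reusing only flags that appear in the log (profile name from this run):
    # start a two-node KVM cluster on the CRI-O runtime, waiting for all components
    out/minikube-linux-amd64 start -p multinode-450140 --wait=true --memory=2200 --nodes=2 --driver=kvm2 --container-runtime=crio
    # confirm both nodes report Running
    out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr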

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-450140 -- rollout status deployment/busybox: (4.659347542s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-g5nbd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-xwxzn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-g5nbd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-xwxzn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-g5nbd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-xwxzn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.18s)
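
The DNS checks above reduce to the following sketch (pod name taken from this run; the manifest is the test's own testdata file):
    out/minikube-linux-amd64 kubectl -p multinode-450140 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-450140 -- rollout status deployment/busybox
    # each busybox replica must be able to resolve the cluster-internal service name
    out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-g5nbd -- nslookup kubernetes.default.svc.cluster.local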

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-g5nbd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-g5nbd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-xwxzn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-450140 -- exec busybox-7dff88458-xwxzn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (53.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-450140 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-450140 -v 3 --alsologtostderr: (52.501465315s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-450140 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp testdata/cp-test.txt multinode-450140:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile25711815/001/cp-test_multinode-450140.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140:/home/docker/cp-test.txt multinode-450140-m02:/home/docker/cp-test_multinode-450140_multinode-450140-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m02 "sudo cat /home/docker/cp-test_multinode-450140_multinode-450140-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140:/home/docker/cp-test.txt multinode-450140-m03:/home/docker/cp-test_multinode-450140_multinode-450140-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m03 "sudo cat /home/docker/cp-test_multinode-450140_multinode-450140-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp testdata/cp-test.txt multinode-450140-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile25711815/001/cp-test_multinode-450140-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140-m02:/home/docker/cp-test.txt multinode-450140:/home/docker/cp-test_multinode-450140-m02_multinode-450140.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140 "sudo cat /home/docker/cp-test_multinode-450140-m02_multinode-450140.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140-m02:/home/docker/cp-test.txt multinode-450140-m03:/home/docker/cp-test_multinode-450140-m02_multinode-450140-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m03 "sudo cat /home/docker/cp-test_multinode-450140-m02_multinode-450140-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp testdata/cp-test.txt multinode-450140-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile25711815/001/cp-test_multinode-450140-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt multinode-450140:/home/docker/cp-test_multinode-450140-m03_multinode-450140.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140 "sudo cat /home/docker/cp-test_multinode-450140-m03_multinode-450140.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140-m03:/home/docker/cp-test.txt multinode-450140-m02:/home/docker/cp-test_multinode-450140-m03_multinode-450140-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m02 "sudo cat /home/docker/cp-test_multinode-450140-m03_multinode-450140-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.29s)
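
The long cp/ssh sequence above repeats one round-trip pattern per node pair; a minimal sketch of that pattern (profile and node names from this run):
    # push a file into a node, then read it back over ssh to verify the copy
    out/minikube-linux-amd64 -p multinode-450140 cp testdata/cp-test.txt multinode-450140:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140 "sudo cat /home/docker/cp-test.txt"
    # node-to-node: copy from one node into another, then verify on the target node
    out/minikube-linux-amd64 -p multinode-450140 cp multinode-450140:/home/docker/cp-test.txt multinode-450140-m02:/home/docker/cp-test_multinode-450140_multinode-450140-m02.txt
    out/minikube-linux-amd64 -p multinode-450140 ssh -n multinode-450140-m02 "sudo cat /home/docker/cp-test_multinode-450140_multinode-450140-m02.txt"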

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-450140 node stop m03: (1.529384218s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-450140 status: exit status 7 (426.405947ms)

                                                
                                                
-- stdout --
	multinode-450140
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-450140-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-450140-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr: exit status 7 (424.773562ms)

                                                
                                                
-- stdout --
	multinode-450140
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-450140-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-450140-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 11:42:29.672470  168105 out.go:345] Setting OutFile to fd 1 ...
	I1028 11:42:29.672598  168105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:42:29.672609  168105 out.go:358] Setting ErrFile to fd 2...
	I1028 11:42:29.672616  168105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 11:42:29.672796  168105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 11:42:29.672987  168105 out.go:352] Setting JSON to false
	I1028 11:42:29.673025  168105 mustload.go:65] Loading cluster: multinode-450140
	I1028 11:42:29.673154  168105 notify.go:220] Checking for updates...
	I1028 11:42:29.673442  168105 config.go:182] Loaded profile config "multinode-450140": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 11:42:29.673465  168105 status.go:174] checking status of multinode-450140 ...
	I1028 11:42:29.673908  168105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:42:29.673985  168105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:42:29.691188  168105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I1028 11:42:29.691749  168105 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:42:29.692307  168105 main.go:141] libmachine: Using API Version  1
	I1028 11:42:29.692332  168105 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:42:29.692678  168105 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:42:29.692911  168105 main.go:141] libmachine: (multinode-450140) Calling .GetState
	I1028 11:42:29.694520  168105 status.go:371] multinode-450140 host status = "Running" (err=<nil>)
	I1028 11:42:29.694544  168105 host.go:66] Checking if "multinode-450140" exists ...
	I1028 11:42:29.694847  168105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:42:29.694910  168105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:42:29.711057  168105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I1028 11:42:29.711463  168105 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:42:29.711945  168105 main.go:141] libmachine: Using API Version  1
	I1028 11:42:29.711971  168105 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:42:29.712310  168105 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:42:29.712469  168105 main.go:141] libmachine: (multinode-450140) Calling .GetIP
	I1028 11:42:29.715193  168105 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:42:29.715563  168105 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:42:29.715598  168105 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:42:29.715760  168105 host.go:66] Checking if "multinode-450140" exists ...
	I1028 11:42:29.716064  168105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:42:29.716100  168105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:42:29.731839  168105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I1028 11:42:29.732281  168105 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:42:29.732839  168105 main.go:141] libmachine: Using API Version  1
	I1028 11:42:29.732861  168105 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:42:29.733195  168105 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:42:29.733409  168105 main.go:141] libmachine: (multinode-450140) Calling .DriverName
	I1028 11:42:29.733665  168105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:42:29.733697  168105 main.go:141] libmachine: (multinode-450140) Calling .GetSSHHostname
	I1028 11:42:29.736777  168105 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:42:29.737288  168105 main.go:141] libmachine: (multinode-450140) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:01:dd", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:39:42 +0000 UTC Type:0 Mac:52:54:00:1c:01:dd Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:multinode-450140 Clientid:01:52:54:00:1c:01:dd}
	I1028 11:42:29.737316  168105 main.go:141] libmachine: (multinode-450140) DBG | domain multinode-450140 has defined IP address 192.168.39.184 and MAC address 52:54:00:1c:01:dd in network mk-multinode-450140
	I1028 11:42:29.737605  168105 main.go:141] libmachine: (multinode-450140) Calling .GetSSHPort
	I1028 11:42:29.737864  168105 main.go:141] libmachine: (multinode-450140) Calling .GetSSHKeyPath
	I1028 11:42:29.738041  168105 main.go:141] libmachine: (multinode-450140) Calling .GetSSHUsername
	I1028 11:42:29.738192  168105 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140/id_rsa Username:docker}
	I1028 11:42:29.819906  168105 ssh_runner.go:195] Run: systemctl --version
	I1028 11:42:29.826353  168105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:42:29.842510  168105 kubeconfig.go:125] found "multinode-450140" server: "https://192.168.39.184:8443"
	I1028 11:42:29.842547  168105 api_server.go:166] Checking apiserver status ...
	I1028 11:42:29.842592  168105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1028 11:42:29.858451  168105 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W1028 11:42:29.868708  168105 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1028 11:42:29.868763  168105 ssh_runner.go:195] Run: ls
	I1028 11:42:29.873452  168105 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I1028 11:42:29.877805  168105 api_server.go:279] https://192.168.39.184:8443/healthz returned 200:
	ok
	I1028 11:42:29.877843  168105 status.go:463] multinode-450140 apiserver status = Running (err=<nil>)
	I1028 11:42:29.877858  168105 status.go:176] multinode-450140 status: &{Name:multinode-450140 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:42:29.877899  168105 status.go:174] checking status of multinode-450140-m02 ...
	I1028 11:42:29.878365  168105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:42:29.878405  168105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:42:29.893741  168105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I1028 11:42:29.894208  168105 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:42:29.894692  168105 main.go:141] libmachine: Using API Version  1
	I1028 11:42:29.894714  168105 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:42:29.895082  168105 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:42:29.895258  168105 main.go:141] libmachine: (multinode-450140-m02) Calling .GetState
	I1028 11:42:29.896812  168105 status.go:371] multinode-450140-m02 host status = "Running" (err=<nil>)
	I1028 11:42:29.896831  168105 host.go:66] Checking if "multinode-450140-m02" exists ...
	I1028 11:42:29.897187  168105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:42:29.897217  168105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:42:29.912441  168105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
	I1028 11:42:29.912904  168105 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:42:29.913390  168105 main.go:141] libmachine: Using API Version  1
	I1028 11:42:29.913412  168105 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:42:29.913718  168105 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:42:29.913906  168105 main.go:141] libmachine: (multinode-450140-m02) Calling .GetIP
	I1028 11:42:29.916379  168105 main.go:141] libmachine: (multinode-450140-m02) DBG | domain multinode-450140-m02 has defined MAC address 52:54:00:e4:89:61 in network mk-multinode-450140
	I1028 11:42:29.916792  168105 main.go:141] libmachine: (multinode-450140-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:89:61", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:40:44 +0000 UTC Type:0 Mac:52:54:00:e4:89:61 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-450140-m02 Clientid:01:52:54:00:e4:89:61}
	I1028 11:42:29.916829  168105 main.go:141] libmachine: (multinode-450140-m02) DBG | domain multinode-450140-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:e4:89:61 in network mk-multinode-450140
	I1028 11:42:29.916930  168105 host.go:66] Checking if "multinode-450140-m02" exists ...
	I1028 11:42:29.917226  168105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:42:29.917250  168105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:42:29.932324  168105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I1028 11:42:29.932764  168105 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:42:29.933288  168105 main.go:141] libmachine: Using API Version  1
	I1028 11:42:29.933312  168105 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:42:29.933635  168105 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:42:29.933820  168105 main.go:141] libmachine: (multinode-450140-m02) Calling .DriverName
	I1028 11:42:29.934002  168105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1028 11:42:29.934044  168105 main.go:141] libmachine: (multinode-450140-m02) Calling .GetSSHHostname
	I1028 11:42:29.936483  168105 main.go:141] libmachine: (multinode-450140-m02) DBG | domain multinode-450140-m02 has defined MAC address 52:54:00:e4:89:61 in network mk-multinode-450140
	I1028 11:42:29.936961  168105 main.go:141] libmachine: (multinode-450140-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:89:61", ip: ""} in network mk-multinode-450140: {Iface:virbr1 ExpiryTime:2024-10-28 12:40:44 +0000 UTC Type:0 Mac:52:54:00:e4:89:61 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-450140-m02 Clientid:01:52:54:00:e4:89:61}
	I1028 11:42:29.936989  168105 main.go:141] libmachine: (multinode-450140-m02) DBG | domain multinode-450140-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:e4:89:61 in network mk-multinode-450140
	I1028 11:42:29.937174  168105 main.go:141] libmachine: (multinode-450140-m02) Calling .GetSSHPort
	I1028 11:42:29.937344  168105 main.go:141] libmachine: (multinode-450140-m02) Calling .GetSSHKeyPath
	I1028 11:42:29.937456  168105 main.go:141] libmachine: (multinode-450140-m02) Calling .GetSSHUsername
	I1028 11:42:29.937593  168105 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19876-132631/.minikube/machines/multinode-450140-m02/id_rsa Username:docker}
	I1028 11:42:30.016766  168105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1028 11:42:30.031880  168105 status.go:176] multinode-450140-m02 status: &{Name:multinode-450140-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1028 11:42:30.031923  168105 status.go:174] checking status of multinode-450140-m03 ...
	I1028 11:42:30.032312  168105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1028 11:42:30.032345  168105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1028 11:42:30.047595  168105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37729
	I1028 11:42:30.048021  168105 main.go:141] libmachine: () Calling .GetVersion
	I1028 11:42:30.048570  168105 main.go:141] libmachine: Using API Version  1
	I1028 11:42:30.048591  168105 main.go:141] libmachine: () Calling .SetConfigRaw
	I1028 11:42:30.048872  168105 main.go:141] libmachine: () Calling .GetMachineName
	I1028 11:42:30.049081  168105 main.go:141] libmachine: (multinode-450140-m03) Calling .GetState
	I1028 11:42:30.050460  168105 status.go:371] multinode-450140-m03 host status = "Stopped" (err=<nil>)
	I1028 11:42:30.050473  168105 status.go:384] host is not running, skipping remaining checks
	I1028 11:42:30.050478  168105 status.go:176] multinode-450140-m03 status: &{Name:multinode-450140-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
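
A condensed sketch of the stop-and-check pattern above; in this run `status` exits with code 7 once a node is stopped, which is why the non-zero exit is recorded while the test still passes:
    out/minikube-linux-amd64 -p multinode-450140 node stop m03
    out/minikube-linux-amd64 -p multinode-450140 status   # exit status 7: m03 host and kubelet report Stopped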

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 node start m03 -v=7 --alsologtostderr
E1028 11:42:38.998870  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-450140 node start m03 -v=7 --alsologtostderr: (40.571525856s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.22s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-450140 node delete m03: (1.724015118s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)
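
A minimal sketch of the delete-and-verify step above, using the same go-template shown in the log to print each remaining node's Ready condition (names from this run):
    out/minikube-linux-amd64 -p multinode-450140 node delete m03
    # every remaining node should print True for its Ready condition
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"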

                                                
                                    
TestMultiNode/serial/RestartMultiNode (178.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-450140 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1028 11:52:38.999229  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-450140 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m57.895703087s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-450140 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.43s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-450140
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-450140-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-450140-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (69.611449ms)

                                                
                                                
-- stdout --
	* [multinode-450140-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-450140-m02' is duplicated with machine name 'multinode-450140-m02' in profile 'multinode-450140'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-450140-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-450140-m03 --driver=kvm2  --container-runtime=crio: (43.665981984s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-450140
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-450140: exit status 80 (214.985124ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-450140 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-450140-m03 already exists in multinode-450140-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-450140-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.99s)
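
A minimal sketch of the two guards exercised above: a new profile cannot reuse a machine name that already belongs to another profile, and `node add` refuses when the next node name already exists as a standalone profile (names and exit codes as observed in this run):
    out/minikube-linux-amd64 start -p multinode-450140-m02 --driver=kvm2 --container-runtime=crio   # exit 14, MK_USAGE: profile name duplicates an existing machine name
    out/minikube-linux-amd64 node add -p multinode-450140                                           # exit 80, GUEST_NODE_ADD: multinode-450140-m03 already exists as its own profile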

                                                
                                    
TestScheduledStopUnix (113.01s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-869038 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-869038 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.393129774s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-869038 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-869038 -n scheduled-stop-869038
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-869038 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1028 11:58:42.908299  140303 retry.go:31] will retry after 142.74µs: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.909506  140303 retry.go:31] will retry after 165.903µs: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.910678  140303 retry.go:31] will retry after 142.664µs: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.911859  140303 retry.go:31] will retry after 224.494µs: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.913018  140303 retry.go:31] will retry after 577.925µs: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.914163  140303 retry.go:31] will retry after 961.236µs: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.915296  140303 retry.go:31] will retry after 688.938µs: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.916432  140303 retry.go:31] will retry after 950.268µs: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.917606  140303 retry.go:31] will retry after 2.787268ms: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.920849  140303 retry.go:31] will retry after 3.983563ms: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.925113  140303 retry.go:31] will retry after 3.723986ms: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.929453  140303 retry.go:31] will retry after 6.804415ms: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.936715  140303 retry.go:31] will retry after 11.525167ms: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.949008  140303 retry.go:31] will retry after 15.215559ms: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
I1028 11:58:42.965302  140303 retry.go:31] will retry after 30.921442ms: open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/scheduled-stop-869038/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-869038 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-869038 -n scheduled-stop-869038
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-869038
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-869038 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-869038
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-869038: exit status 7 (65.974511ms)

                                                
                                                
-- stdout --
	scheduled-stop-869038
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-869038 -n scheduled-stop-869038
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-869038 -n scheduled-stop-869038: exit status 7 (66.029855ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-869038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-869038
--- PASS: TestScheduledStopUnix (113.01s)
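
A condensed sketch of the scheduled-stop lifecycle the test walks through (profile name from this run; all flags appear in the log above):
    out/minikube-linux-amd64 stop -p scheduled-stop-869038 --schedule 5m         # arm a stop five minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-869038 --cancel-scheduled    # cancel it; the host keeps running
    out/minikube-linux-amd64 stop -p scheduled-stop-869038 --schedule 15s        # re-arm with a short delay
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-869038  # prints "Stopped" (exit 7) once the stop fires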

                                                
                                    
TestRunningBinaryUpgrade (192.51s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3402915103 start -p running-upgrade-628680 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3402915103 start -p running-upgrade-628680 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m44.697188427s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-628680 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1028 12:02:38.998146  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-628680 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m23.019651962s)
helpers_test.go:175: Cleaning up "running-upgrade-628680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-628680
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-628680: (2.266325068s)
--- PASS: TestRunningBinaryUpgrade (192.51s)
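
The upgrade path above is: bring the cluster up with an older released binary, then run `start` on the same profile with the freshly built binary while the cluster is still running. A sketch, with the old binary path as it appeared in this run (it is a per-run temp file):
    /tmp/minikube-v1.26.0.3402915103 start -p running-upgrade-628680 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-628680 --memory=2200 --driver=kvm2 --container-runtime=crio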

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.51s)

                                                
                                    
TestPause/serial/Start (85.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-729494 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-729494 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m25.198232877s)
--- PASS: TestPause/serial/Start (85.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (205s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.646461055 start -p stopped-upgrade-755815 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1028 12:00:09.886795  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.646461055 start -p stopped-upgrade-755815 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m13.102227034s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.646461055 -p stopped-upgrade-755815 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.646461055 -p stopped-upgrade-755815 stop: (12.308703914s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-755815 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-755815 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.589758839s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (205.00s)
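
Same idea as the running-binary upgrade, except the cluster is stopped with the old binary before the new one takes over (paths and profile from this run):
    /tmp/minikube-v1.26.0.646461055 start -p stopped-upgrade-755815 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.646461055 -p stopped-upgrade-755815 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-755815 --memory=2200 --driver=kvm2 --container-runtime=crio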

                                                
                                    
TestNetworkPlugins/group/false (3.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-903216 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-903216 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (109.112069ms)

                                                
                                                
-- stdout --
	* [false-903216] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1028 12:01:53.756898  177181 out.go:345] Setting OutFile to fd 1 ...
	I1028 12:01:53.757136  177181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:01:53.757144  177181 out.go:358] Setting ErrFile to fd 2...
	I1028 12:01:53.757148  177181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1028 12:01:53.757353  177181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-132631/.minikube/bin
	I1028 12:01:53.757943  177181 out.go:352] Setting JSON to false
	I1028 12:01:53.758922  177181 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6257,"bootTime":1730110657,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1028 12:01:53.759006  177181 start.go:139] virtualization: kvm guest
	I1028 12:01:53.761337  177181 out.go:177] * [false-903216] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1028 12:01:53.762901  177181 out.go:177]   - MINIKUBE_LOCATION=19876
	I1028 12:01:53.762941  177181 notify.go:220] Checking for updates...
	I1028 12:01:53.765886  177181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1028 12:01:53.767242  177181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	I1028 12:01:53.768681  177181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	I1028 12:01:53.770230  177181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1028 12:01:53.771801  177181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1028 12:01:53.773797  177181 config.go:182] Loaded profile config "pause-729494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1028 12:01:53.773911  177181 config.go:182] Loaded profile config "running-upgrade-628680": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 12:01:53.774002  177181 config.go:182] Loaded profile config "stopped-upgrade-755815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1028 12:01:53.774105  177181 driver.go:394] Setting default libvirt URI to qemu:///system
	I1028 12:01:53.811179  177181 out.go:177] * Using the kvm2 driver based on user configuration
	I1028 12:01:53.812835  177181 start.go:297] selected driver: kvm2
	I1028 12:01:53.812853  177181 start.go:901] validating driver "kvm2" against <nil>
	I1028 12:01:53.812865  177181 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1028 12:01:53.815133  177181 out.go:201] 
	W1028 12:01:53.816665  177181 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1028 12:01:53.818196  177181 out.go:201] 

                                                
                                                
** /stderr **
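
The non-zero exit above is the expected guard rather than a failure: CRI-O relies on a CNI plugin for pod networking, so minikube rejects `--cni=false` with that runtime before creating any VM. A minimal reproduction, with the flags and exit code observed in this run:
    out/minikube-linux-amd64 start -p false-903216 --cni=false --driver=kvm2 --container-runtime=crio
    echo $?   # 14 (MK_USAGE: the "crio" container runtime requires CNI)
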
net_test.go:88: 
----------------------- debugLogs start: false-903216 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-903216" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 28 Oct 2024 12:01:08 UTC
provider: minikube.sigs.k8s.io
version: v1.34.0
name: cluster_info
server: https://192.168.50.55:8443
name: pause-729494
contexts:
- context:
cluster: pause-729494
extensions:
- extension:
last-update: Mon, 28 Oct 2024 12:01:08 UTC
provider: minikube.sigs.k8s.io
version: v1.34.0
name: context_info
namespace: default
user: pause-729494
name: pause-729494
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-729494
user:
client-certificate: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/client.crt
client-key: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-903216

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-903216"

                                                
                                                
----------------------- debugLogs end: false-903216 [took: 2.952414983s] --------------------------------
helpers_test.go:175: Cleaning up "false-903216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-903216
--- PASS: TestNetworkPlugins/group/false (3.22s)
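Note: every kubectl probe in the debugLogs above fails with the same message because the "false-903216" profile did not exist when the logs were collected; the only context in the kubeconfig at that point was pause-729494, with current-context unset. A minimal sketch of the failure mode, assuming kubectl is on PATH (the exact subcommand does not matter, the error is raised at context lookup):
	$ kubectl --context false-903216 get pods -A
	error: context "false-903216" does not exist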

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606176 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-606176 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (65.768463ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-606176] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19876
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19876-132631/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-132631/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
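This subtest only asserts the usage error: --no-kubernetes and --kubernetes-version are mutually exclusive, so minikube exits with MK_USAGE (status 14) before doing any work. A sketch of the rejected invocation versus the form the later NoKubernetes subtests use:
	$ out/minikube-linux-amd64 start -p NoKubernetes-606176 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio   # rejected, exit status 14 (MK_USAGE)
	$ out/minikube-linux-amd64 start -p NoKubernetes-606176 --no-kubernetes --driver=kvm2 --container-runtime=crio                             # accepted (see StartWithStopK8s below)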

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (54.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606176 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-606176 --driver=kvm2  --container-runtime=crio: (53.929425389s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-606176 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (54.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (39.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606176 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-606176 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.172215844s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-606176 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-606176 status -o json: exit status 2 (243.998627ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-606176","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-606176
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-606176: (1.026734495s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.44s)
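With Kubernetes stopped, minikube status reports Host=Running but Kubelet/APIServer=Stopped and exits 2, which the test tolerates. A sketch for pulling those fields out of the JSON above, assuming jq is installed on the host:
	$ out/minikube-linux-amd64 -p NoKubernetes-606176 status -o json | jq '{Host, Kubelet, APIServer}'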

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-755815
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-755815: (1.039179424s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (49.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606176 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-606176 --no-kubernetes --driver=kvm2  --container-runtime=crio: (49.200484465s)
--- PASS: TestNoKubernetes/serial/Start (49.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-606176 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-606176 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.330929ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
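The assertion above hinges on systemctl's exit code: systemctl is-active --quiet exits 0 when the unit is active and non-zero (typically 3, surfaced here as ssh status 3) when it is not, so the non-zero exit is exactly what a --no-kubernetes profile should produce. The same check run by hand, in sketch form:
	$ out/minikube-linux-amd64 ssh -p NoKubernetes-606176 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not active (expected for --no-kubernetes)"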

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-606176
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-606176: (1.347572545s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (42.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-606176 --driver=kvm2  --container-runtime=crio
E1028 12:04:52.956834  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-606176 --driver=kvm2  --container-runtime=crio: (42.903335355s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-606176 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-606176 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.179577ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (113.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-871884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-871884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m53.070064377s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-871884 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6477bdaa-a202-4792-8bac-8a62b685f645] Pending
helpers_test.go:344: "busybox" [6477bdaa-a202-4792-8bac-8a62b685f645] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6477bdaa-a202-4792-8bac-8a62b685f645] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.06317649s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-871884 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.37s)
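The DeployApp subtests all follow the same pattern: create the busybox pod, poll until the pod matching integration-test=busybox is Running, then exec into it. A rough equivalent with plain kubectl (a sketch; the harness does its own polling rather than calling kubectl wait):
	$ kubectl --context no-preload-871884 create -f testdata/busybox.yaml
	$ kubectl --context no-preload-871884 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	$ kubectl --context no-preload-871884 exec busybox -- /bin/sh -c "ulimit -n"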

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (59.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-709250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-709250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (59.009708493s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-871884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-871884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021611897s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-871884 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-349222 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-349222 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m31.712158039s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.71s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-709250 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5b4fe638-16cb-4b05-84af-dd2fec47b9e3] Pending
helpers_test.go:344: "busybox" [5b4fe638-16cb-4b05-84af-dd2fec47b9e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5b4fe638-16cb-4b05-84af-dd2fec47b9e3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004010164s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-709250 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-709250 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-709250 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-349222 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c05cecfd-f2f1-460d-b2d2-9fab93bcb9b2] Pending
helpers_test.go:344: "busybox" [c05cecfd-f2f1-460d-b2d2-9fab93bcb9b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c05cecfd-f2f1-460d-b2d2-9fab93bcb9b2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004525912s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-349222 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-349222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-349222 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (650.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-871884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-871884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m50.228034784s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-871884 -n no-preload-871884
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (650.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (552.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-709250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-709250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m12.158505526s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-709250 -n embed-certs-709250
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (552.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-089993 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-089993 --alsologtostderr -v=3: (3.291098887s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993 -n old-k8s-version-089993: exit status 7 (65.037314ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-089993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
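EnableAddonAfterStop relies on two behaviours visible above: minikube status prints Stopped and exits 7 for a powered-off profile (which the test treats as acceptable), and addons can still be enabled against the stopped profile, presumably so the setting takes effect on the SecondStart that follows. In sketch form:
	$ out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-089993      # "Stopped", exit status 7
	$ out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-089993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4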

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (511.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-349222 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1028 12:14:02.071043  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:15:09.886797  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:17:38.998153  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/addons-892779/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:20:09.887105  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-349222 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (8m31.013618932s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-349222 -n default-k8s-diff-port-349222
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (511.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-604556 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-604556 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (48.227789994s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (82.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m22.15927839s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-604556 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-604556 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.406635625s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-604556 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-604556 --alsologtostderr -v=3: (11.38974581s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-604556 -n newest-cni-604556
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-604556 -n newest-cni-604556: exit status 7 (69.532145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-604556 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (44.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-604556 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-604556 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (44.06966125s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-604556 -n newest-cni-604556
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (44.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-604556 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-604556 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-604556 -n newest-cni-604556
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-604556 -n newest-cni-604556: exit status 2 (262.081114ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-604556 -n newest-cni-604556
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-604556 -n newest-cni-604556: exit status 2 (258.227665ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-604556 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-604556 -n newest-cni-604556
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-604556 -n newest-cni-604556
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.84s)
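The Pause subtest treats exit status 2 from minikube status as the expected signal that components are frozen: while paused the API server reports Paused and the kubelet Stopped, and both recover after unpause. The sequence exercised above, condensed:
	$ out/minikube-linux-amd64 pause -p newest-cni-604556 --alsologtostderr -v=1
	$ out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-604556   # Paused, exit status 2
	$ out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-604556     # Stopped, exit status 2
	$ out/minikube-linux-amd64 unpause -p newest-cni-604556 --alsologtostderr -v=1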

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (62.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m2.487017512s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-903216 "pgrep -a kubelet"
I1028 12:37:25.043331  140303 config.go:182] Loaded profile config "auto-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-903216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hmlnq" [d2a20e22-9df2-46b5-9777-4a2e2a18dad9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hmlnq" [d2a20e22-9df2-46b5-9777-4a2e2a18dad9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004708624s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-903216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
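The three auto-CNI connectivity checks above all run inside the netcat deployment: in-cluster DNS resolution, localhost reachability, and hairpin traffic back to the pod through its own service. Condensed, the commands are:
	$ kubectl --context auto-903216 exec deployment/netcat -- nslookup kubernetes.default                  # DNS
	$ kubectl --context auto-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # Localhost
	$ kubectl --context auto-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # HairPin: the service name resolves back to the pod itself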

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (82.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m22.249495469s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (102.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1028 12:37:57.510007  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:38:02.632024  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m42.103410947s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (102.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dqqj2" [735413f4-7989-4174-b8b5-92395b8174ac] Running
E1028 12:38:12.873994  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:38:12.960551  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/functional-452974/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003983637s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
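ControllerPod only verifies that the kindnet pod (label app=kindnet) is Running in kube-system before the connectivity checks start. Roughly equivalent by hand, as a sketch:
	$ kubectl --context kindnet-903216 -n kube-system get pods -l app=kindnet
	$ kubectl --context kindnet-903216 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m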

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-903216 "pgrep -a kubelet"
I1028 12:38:18.717465  140303 config.go:182] Loaded profile config "kindnet-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-903216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9htbf" [fc7be206-a876-4209-9fc9-b7806ff09b18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9htbf" [fc7be206-a876-4209-9fc9-b7806ff09b18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004704165s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-903216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (56.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (56.287049671s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (100.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1028 12:39:05.501610  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:05.508056  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:05.519490  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:05.540912  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:05.582395  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:05.663951  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:05.825572  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:06.147355  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:06.788733  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:08.070729  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:10.632556  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:14.317405  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.crt: no such file or directory" logger="UnhandledError"
E1028 12:39:15.754838  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m40.059520418s)
--- PASS: TestNetworkPlugins/group/flannel/Start (100.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2sv8w" [c8746918-577d-4ccd-949c-f769de49e8d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.118107426s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-903216 "pgrep -a kubelet"
I1028 12:39:22.301497  140303 config.go:182] Loaded profile config "calico-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-903216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-903216 replace --force -f testdata/netcat-deployment.yaml: (1.220776351s)
I1028 12:39:23.528634  140303 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1028 12:39:23.558528  140303 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hd457" [69d6b835-a1e6-45f9-a770-8d8ede75080d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 12:39:25.996384  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-hd457" [69d6b835-a1e6-45f9-a770-8d8ede75080d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005237434s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-903216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-903216 "pgrep -a kubelet"
I1028 12:39:39.020039  140303 config.go:182] Loaded profile config "custom-flannel-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-903216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bsknq" [5e8e0019-802c-4452-8779-9e6785587142] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bsknq" [5e8e0019-802c-4452-8779-9e6785587142] Running
E1028 12:39:46.477748  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/old-k8s-version-089993/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006342878s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-903216 "pgrep -a kubelet"
I1028 12:39:43.012603  140303 config.go:182] Loaded profile config "enable-default-cni-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-903216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vwqzc" [154724c4-4871-42b4-af89-0e48d0a281bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vwqzc" [154724c4-4871-42b4-af89-0e48d0a281bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005393642s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-903216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (94.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-903216 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m34.791761303s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-903216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cvvzv" [bafa0a74-3c68-49d0-af2a-0b86385a68f2] Running
E1028 12:40:36.239176  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/no-preload-871884/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003981844s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-903216 "pgrep -a kubelet"
I1028 12:40:38.887418  140303 config.go:182] Loaded profile config "flannel-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-903216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vvczh" [5ea555f8-ac7b-41c3-8f52-6ef8174f9510] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1028 12:40:40.292694  140303 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/default-k8s-diff-port-349222/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vvczh" [5ea555f8-ac7b-41c3-8f52-6ef8174f9510] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004227065s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-903216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-903216 "pgrep -a kubelet"
I1028 12:41:30.126218  140303 config.go:182] Loaded profile config "bridge-903216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-903216 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rglqn" [456e0ab5-b60f-4f66-a549-5997314b8abb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rglqn" [456e0ab5-b60f-4f66-a549-5997314b8abb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004465881s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-903216 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-903216 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (39/314)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
262 TestStartStop/group/disable-driver-mounts 0.15
266 TestNetworkPlugins/group/kubenet 6.09
274 TestNetworkPlugins/group/cilium 3.65
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-892779 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-219559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-219559
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (6.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-903216 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-903216" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:01:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.55:8443
  name: pause-729494
contexts:
- context:
    cluster: pause-729494
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:01:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-729494
  name: pause-729494
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-729494
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/client.crt
    client-key: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-903216

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-903216"

                                                
                                                
----------------------- debugLogs end: kubenet-903216 [took: 5.937045061s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-903216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-903216
--- SKIP: TestNetworkPlugins/group/kubenet (6.09s)
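
Every kubectl-based probe in the kubenet-903216 debug dump above fails with the same "context was not found" / "Profile ... not found" output because the group was skipped before "minikube start" ever ran, so no kubeconfig context exists for that profile. As a hedged sketch only (the contextExists helper, the file name, and the hard-coded profile name are illustrative and not part of the minikube test suite), this is how a harness could check for a context before shelling out to kubectl:

// context_check.go: hypothetical helper, not part of the minikube test suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether the named kubeconfig context is known to kubectl,
// by listing context names with "kubectl config get-contexts -o name".
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// "kubenet-903216" is the profile name from the debug log above.
	ok, err := contextExists("kubenet-903216")
	if err != nil {
		fmt.Println("could not query kubectl:", err)
		return
	}
	fmt.Println("context present:", ok)
}

Run against the kubeconfig captured above, this would print "context present: false": only the pause-729494 context is defined and current-context is empty, which is exactly why each probe for kubenet-903216 errors out.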

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-903216 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-903216" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19876-132631/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:01:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.55:8443
  name: pause-729494
contexts:
- context:
    cluster: pause-729494
    extensions:
    - extension:
        last-update: Mon, 28 Oct 2024 12:01:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-729494
  name: pause-729494
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-729494
  user:
    client-certificate: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/client.crt
    client-key: /home/jenkins/minikube-integration/19876-132631/.minikube/profiles/pause-729494/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-903216

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-903216" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-903216"

                                                
                                                
----------------------- debugLogs end: cilium-903216 [took: 3.498340175s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-903216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-903216
--- SKIP: TestNetworkPlugins/group/cilium (3.65s)
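
The cilium group is skipped at net_test.go:102 before any cluster is created, which is why the debugLogs collection above takes only a few seconds and every probe reports a missing profile or context. A minimal sketch of what such an early skip looks like in a Go subtest, assuming a table-driven layout (the file name, test name, and loop are illustrative, not the actual minikube source):

// plugin_skip_test.go: illustrative sketch, not the minikube net_test.go source.
package main

import (
	"strings"
	"testing"
)

func TestNetworkPluginsSketch(t *testing.T) {
	for _, name := range []string{"cilium"} {
		t.Run(name, func(t *testing.T) {
			// Skipping before "minikube start" means no profile is ever created,
			// so any later kubectl or minikube probe for this profile can only
			// report "Profile ... not found" or "context was not found".
			if strings.EqualFold(name, "cilium") {
				t.Skip("Skipping the test as it's interfering with other tests and is outdated")
			}
		})
	}
}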

                                                
                                    